Session – WPF: Extensible BitmapEffects, Pixel Shaders, and WPF Graphics Futures

PDC 2008, Day #4, Session #4, 1 hr 15 mins

David Teitlebaum
Program Manager
WPF Team

My final session at PDC 2008 was a talk about the improvements in WPF graphics that are available in .NET Framework 3.5 SP1.  David also touched briefly on some possible future features (i.e. features that would appear in .NET Framework 4.0).

David’s main topic was to walk through the details of the new Shader Effects model, which replaces the old Bitmap Effects feature.

What are Bitmap Effects?

These are effects that are applied to an individual UI element, like a button, to create some desired visual effect.  This includes things like drop shadow, bevels and blur effects.


The BitmapEffect object was introduced in .NET Framework 3.0 (the first WPF release).  But it had some problems, which led to its replacement by Shader Effects in 3.5 SP1.

Problems with BitmapEffect:

  • They were rendered in software
  • Blur operations were very slow
  • There were various limitations, including no ClearType support, no anisotropic filtering, etc.

New Shader Effects

Basic characteristics in the new Shader Effects include:

  • GPU accelerated
  • The most popular bitmap effects have been reimplemented with hardware acceleration
    • But outer glow was not reimplemented
  • Can author custom hardware-accelerated bitmap effects using HLSL
  • There is a software-only fallback pipeline that is actually faster than the old Bitmap Effects
  • New Shader Effects run on most video cards
    • Require PixelShader 2.0, which is about 5 years old

How Do You Do Shader Effects?

Here’s an outline of how you use the new Shader Effect model:

  • Derive a custom class from the new ShaderEffect class (which derives from Effect)
  • You write the actual pixel shader code in HLSL, the language used to write custom hardware-accelerated effects for Direct3D
    • Language is C-like
    • Compiled to byte-code, consumed by video driver, runs on GPU
  • Some more details about HLSL, as used in WPF
    • DirectX 10 supports HLSL 4.0
    • WPF currently only supports PixelShader 2.0

So what do pixel shaders really do?  They basically take in a texture (bitmap) as input, do some processing on each point, and return a revised texture as an output.

Basically, you have a main function that accepts the coordinates of the current pixel to be mapped.  Your code accesses the original input texture through a register, so it just uses the input parameter (an X/Y texture coordinate) to index into the source texture.  It then does some processing on that pixel and returns a color value, which is simply the resulting color at the specified coordinate.
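To make that concrete, here’s a minimal grayscale pixel shader in HLSL.  (This is my own sketch, not from the talk: the s0 sampler register is where WPF binds the element’s rendered output, and the luminance weights are just one common choice.)

```hlsl
// Source texture (the rendered UI element), bound by WPF to sampler register s0.
sampler2D input : register(s0);

// Called once per output pixel; uv is the texture coordinate of that pixel.
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv);                       // sample source pixel
    float gray = dot(color.rgb, float3(0.30, 0.59, 0.11)); // weighted luminance
    return float4(gray, gray, gray, color.a);              // keep original alpha
}
```

You compile this to byte-code with the DirectX fxc compiler, targeting ps_2_0.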

The final step is to create, in managed code, a class that derives from ShaderEffect and hook it up to the compiled pixel shader (.ps file) that you wrote.  You can then apply your shader to any WPF UIElement in XAML, by setting the element’s Effect property.
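Here’s roughly what that managed wrapper looks like.  (My own sketch: the class name and the pack URI for the compiled .ps file are illustrative.)

```csharp
public class GrayscaleEffect : ShaderEffect
{
    // Bind the "Input" property to sampler register 0 in the pixel shader.
    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty(
            "Input", typeof(GrayscaleEffect), 0);

    public GrayscaleEffect()
    {
        // Point at the compiled shader byte-code, built into the assembly.
        PixelShader = new PixelShader
        {
            UriSource = new Uri("pack://application:,,,/Grayscale.ps")
        };
        UpdateShaderValue(InputProperty);
    }

    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }
}
```

And the XAML side is just:

```xml
<Button Content="OK">
  <Button.Effect>
    <local:GrayscaleEffect />
  </Button.Effect>
</Button>
```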

Direct3D Interop

David’s next topic was to talk a bit about interop’ing with Direct3D.  This just means that your WPF application can easily host Direct3D content by using a new class called D3DImage.

This was pretty cool.  David demoed displaying a Direct3D wireframe in the background (WPF 3D subsystem can’t do wireframes), with WPF GUI elements in the foreground, overlaying the background image.

The basic idea is that you create a Direct3D device in unmanaged code and then hook it to a new instance of a WPF D3DImage element, which you include in your visual hierarchy.
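In managed code, that hookup looks something like this.  (A sketch: GetSurfacePointer stands in for your unmanaged interop layer, which creates the Direct3D device and hands back an IDirect3DSurface9 pointer.)

```csharp
var d3dImage = new D3DImage();

d3dImage.Lock();
d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, GetSurfacePointer());
d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));
d3dImage.Unlock();

// D3DImage derives from ImageSource, so it plugs into the visual tree
// anywhere an image can go.
myImage.Source = d3dImage;
```

Each time your Direct3D code renders a new frame, you repeat the Lock / AddDirtyRect / Unlock dance to tell WPF to recomposite.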

WPF Futures

Finally, David touched very briefly on some possible future features.  These are things that may show up in WPF 4.0 (.NET Framework 4.0), or possibly beyond that.

Some of the features likely included in WPF 4.0 include:

  • Increased graphical richness  (e.g. Pixelshader 3.0)
  • Offloading more work to the GPU
  • Better rendering quality
    • Integrate DirectWrite for text clarity
    • Layout rounding

And some of the possible post-4.0 features include:

  • Better exploitation of hardware
  • Vertex shaders
  • Shader groups
  • Shaders in WPF 3D
  • 3D improvements
  • Better media extensibility


You can get at David’s PDC08 slide deck for this talk here:

And you can find full video from the session at:

Session – Windows 7: Unlocking the GPU with Direct3D

PDC 2008, Day #4, Session #3, 1 hr 15 mins

Allison Klein

I jumped off the Azure track (starting to be a bit repetitive) and next went to a session focused on Direct3D.

Despite the title, this session really had nothing to do with Windows 7, other than the fact that it talked a lot about Direct3D 11, which will be included in Windows 7 and available for Windows Vista.

Direct3D 10

Direct3D 10 is the currently shipping version, and it is supported by most modern video cards, as well as some integrated graphics chips.  Direct3D 10 shipped out-of-the-box with Windows Vista, but it is not available for Windows XP.

Allison spent about half of the talk going through things that are different in Direct3D 10, as compared with Direct3D 9.

I’m not inclined to rehash all of the details.  (I’ll include a link to Allison’s slide deck when it is available).

The main takeaway was that it’s very much worth programming to the v10 API, as opposed to the v9 API.  Some of the reasons for this include:

  • Much more consistent behavior, across devices
  • Cleaner API
  • Elimination of the large CAPS (device capability) matrix
  • Built-in driver that allows D3D10 to talk to D3D9 hardware
  • Addition of WARP 10 software rasterizer, to support devices that don’t support WDDM directly.  This is actually quite a bit faster than earlier software-only implementations

Direct3D 11

In the second half of her talk, Allison talked about the advances coming in Direct3D 11.  She mentioned that D3D11 will ship with Windows 7 and also be available for Windows Vista.

Again, the details are probably more appropriate for a game developer.  (See the slide deck).  But the high level points are:

  • Direct3D 11 is a strict superset of 10—there are no changes to existing 10 features
  • Better support for character authoring, for denser meshes and more detailed characters
  • Addition of tessellation to the rendering pipeline, for better performance and quality
  • Much more scalable multi-threading support.
    • Much more flexibility in what can be distributed across threads
  • Dynamically linking in custom shaders
  • Introduction of object-oriented features (interfaces/classes) to HLSL
  • New block compression
  • Direct3D 11 will be available in the Nov 2008 SDK


Finally, Allison touched briefly on some future directions that the Direct3D team is thinking about.

The main topic that she talked about here was in potentially using the GPU to perform highly parallel general purpose compute intensive tasks.  The developer would use HLSL to write a “compute shader”, which would then get sent to the GPU to do the work.  As an example, she talked about using this mechanism for post-processing of an image.

Session – Services Symposium: Enterprise Grade Cloud Applications

PDC 2008, Day #4, Session #2, 1 hr 30 mins

Eugenio Pace

My second session on Thursday was a continuation of the cloud services symposium from the first session.  There was a third part to the symposium, which I did not attend.

The presenter for this session, Eugenio, was not nearly as good a presenter as Gianpaolo from the previous session.  So it was a bit less dynamic, and harder to stay interested.

The session basically consisted of a single demo, which illustrated some of the possible solutions to the Identity, Monitoring, and Integration challenges mentioned in the previous session.


Eugenio pointed out the problems involved in authentication/authorization.  You don’t want to require the enterprise users to have a unique username/password combination for each service that they use.  And pushing out the enterprise (e.g. Active Directory) credential information to the third party service is not secure and creates a management nightmare.

The proposed solution is to use a central (federated) identity system to do the authentication and authorization.  This is the purpose of the Azure Access Control service.


The next part of the demo showed how Azure supports remote management, on the part of IT staff at an individual customer site, of their instance of your application.  The basic things that you can do remotely include:

  • Active real-time monitoring of application health
  • Trigger administrative actions, based on the current state

The end result (and goal) is that you have the same scope of control over your application as you’d have if it were on premises.

Application Integration

Finally, Eugenio did some demos related to “process integration”—allowing your service to be called from a legacy service or system.  This demo actually woke everyone up, because Eugenio brought up an archaic green-screen AS/400 system in an emulator and proceeded to have it talk to his Azure service.


The conclusions were recommendations to both IT organizations and ISVs:

  • Enterprise IT organization
    • Don’t settle for sub-optimal solutions
    • Tap into the benefits of Software+Services
  • ISV
    • Don’t give them an excuse to reject your solution
    • Make use of better tools, frameworks, and services

Session – Services Symposium: Expanding Applications to the Cloud

PDC 2008, Day #4, Session #1, 1 hr 30 mins

Gianpaolo Carraro

As the last day of PDC starts, I’m down to four sessions to go.  I’ll continue doing a quick blog post on each session, where I share my notes, as well as some miscellaneous thoughts.

The Idea of a Symposium

Gianpaolo started out by explaining that they were finishing PDC by doing a pair of symposiums, each a series of three different sessions.  One symposium focused on parallel computing and the other on cloud-based services.  This particular session was the first in the set of three that addressed cloud services.

The idea of a symposium, explained Gianpaolo, is to take all of the various individual technologies and try to sort of fit the puzzle pieces together, providing a basic context.

The goal was also to present some of the experience that Microsoft has gained from early usage of the Azure platform over the past 6-12 months.  He said that he himself had spent the last 6-12 months using the new services, so he had some thoughts to share.

This first session in the symposium focused on taking existing business applications and expanding them to “the cloud”.  When should an ISV do this?  Why?  How?

Build vs. Buy and On-Premises vs. Cloud

Gianpaolo presented a nice matrix showing the two basic independent decisions that you face when looking for software to fulfill a need.

  • Build vs. Buy – Can I buy a packaged off-the-shelf product that does what I need?  Or are my needs specialized enough that I need to build my own stuff?
  • On-Premises vs. Cloud – Should I run this software on my own servers?  Or host everything up in “the cloud”?

There are, of course, tradeoffs on both sides of each decision.  These have been discussed ad infinitum elsewhere, but the basic tradeoffs are:

  • Build vs. Buy – Features vs. Cost
  • On-Premises vs. Cloud – Control vs. Economy of Scale

Here’s the graph that Gianpaolo presented, showing six different classes of software, based on how you answer these questions.  Note that on the On-Premises vs. Cloud scale, there is a middle column that represents taking applications that you essentially control and moving them to co-located servers.

This is a nice way to look at things.  It shows that, for each individual software function, it can live anywhere on this graph.  In fact, Gianpaolo’s main point is that you can deploy different pieces of your solution at different spots on the graph.

So the idea is that while you might start off on-premises, you can push your solution out to either a co-located hosting server or to the cloud in general.  This is true of both packaged apps as well as custom-developed software.


The main challenge in moving things out of the enterprise is dealing with the various issues that show up now when your data needs to cross the corporate/internet boundary.

There are several separate types of challenges that show up:

  • Identity challenges – as you move across various boundaries, how does the software know who you are and what you’re allowed to access?
  • Monitoring and Management challenges – how do you know if your application is healthy, if it’s running out in the cloud?
  • Application Integration challenge – how do various applications communicate with each other, across the various boundaries?

Solutions to the Identity Problem

Gianpaolo proposed the following possible solutions to this problem of identity moving across the different boundaries:

  • Federated ID
  • Claim-based access control
  • Geneva identity system, or CardSpace

The basic idea was that Microsoft has various assets that can help with this problem.

Solutions to the Monitoring and Management Problem

Next, the possible solutions to the monitoring and management problem included:

  • Programmatic access to a “Health” model
  • Various management APIs
  • Firewall-friendly protocols
  • Powershell support

Solutions to the Application Integration Problem

Finally, some of the proposed solutions to the application integration problem included:

  • ServiceBus
  • Oslo
  • Azure storage
  • Sync framework

The ISV Perspective

The above issues were all from an IT perspective.  But you can look at the same landscape from the perspective of an independent software vendor, trying to sell solutions to the enterprise.

To start with, there are two fundamentally different ways that the ISV can make use of “the cloud”:

  • As a service mechanism, for delivering your services via the cloud
    • You make your application’s basic services available over the internet, no matter where it is hosted
    • This is mostly a customer choice, based on where they want to deploy
  • As a platform
    • Treating the cloud as a platform, where your app runs
    • Benefits are the economy of scale
    • Mostly an ISV choice
    • E.g. you could use Azure without your customer even being aware of it

When delivering your software as a service, you need to consider things like:

  • Is the feature set available via cloud sufficient?
  • Firewall issues
  • Need a management interface for your customers

Some Patterns

Gianpaolo presented some miscellaneous design considerations and patterns that might apply to applications deployed in the cloud.


  • Design for average load, handling the ‘peak’ as an exception
  • I.e. only go to the cloud for scalability when you need to

Worker / Queue / Blob Pattern

Let’s say that you have a task like encoding and publishing video.  You can push the data out to the cloud, where the encoding work happens.  (The raw data is placed in a “blob” in cloud storage.)  You then add an entry to a queue, indicating that there is work to be done, and a separate worker process eventually does the encoding work.

This is a nice pattern for supporting flexible scaling—both the queues and the worker processes could be scaled out separately.
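In pseudo-C#, the pattern looks something like this.  (BlobStore, WorkQueue, and Encode are hypothetical stand-ins for the Azure storage and queue client APIs and your own encoding logic.)

```csharp
// Producer: stage the raw data in blob storage, then enqueue a work item
// that points at it.
string blobId = BlobStore.Put("raw-videos", rawVideoBytes);
WorkQueue.Enqueue("encode-queue", blobId);

// Worker process (scaled out independently of the producers):
while (true)
{
    string id = WorkQueue.Dequeue("encode-queue"); // waits until work arrives
    byte[] raw = BlobStore.Get("raw-videos", id);
    byte[] encoded = Encode(raw);                  // the expensive step
    BlobStore.Put("encoded-videos", encoded);
}
```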

CAP: Pick 2 out of 3

  • Consistency
  • Availability
  • Tolerance to network Partitioning

Eventual Consistency (ACID vs. BASE)

The idea here is that we are all used to the ACID characteristics listed below.  We need to guarantee that the data is consistent and correct, which means that performance will likely suffer.  As an example, we might have a process submit data synchronously because we need to guarantee that the data gets to its destination.

But Gianpaolo talked about the idea of “eventual consistency”.  For most applications, while it’s important for your data to be correct and consistent, it’s not necessary for it to be consistent right now.  This leads to a model that he referred to as BASE, with the characteristics listed below.

  • ACID
    • Atomicity
    • Consistency
    • Isolation
    • Durability
  • BASE
    • Basically Available
    • Soft state
    • Eventually consistent

Fundamental Lesson

Basically the main takeaway is:

  • Put the software components in the place that makes the most sense, given their use

Session – Building Mesh-Enabled Web Applications Using the Live Framework

PDC 2008, Day #3, Session #5, 1 hr 15 mins

Arash Ghanaie-Sichanie

Throughout the conference, I bounced back and forth between Azure and Live Mesh sessions.  I was trying to make sense of the difference between them and understand when you might use one vs. the other.

I understand that Azure Services is the underlying platform that Live Services, and Live Mesh, are built on.  But at the outset, it still wasn’t quite clear what class of applications Live Services were targeted at.  Which apps would want to use Live Services and which would need to drop down and use Azure?

I think that after Arash’s talk, I have a working stab at answering this question.  This is sort of my current understanding, that I’ll update as things become more clear.

Question: When should I use Live Services / Live Mesh ?


  • Create a Live Mesh application (Mesh-enabled) if your customers are currently, or will become, Live customers.
  • Otherwise, use Azure cloud-based services outside of Live, or the other services built on top of Azure:
    • Azure storage services
    • Sync Framework
    • Service Bus
    • SQL Data Services

Unless I’m missing something, you can’t make use of features of the Live Operating Environment, either as a web application or a local desktop client, unless your application is registered with Live and your user has added your application to his Live account.

The one possible loophole that I can see is that you might just have your app always authorize, for all users, using a central account.  That account could be pre-created and pre-configured.  Your application would then use the same account for all users, enabling some of the synchronization and other options.  But even this approach might not work—it’s possible that any device where local Mesh data is to be stored needs to be registered with that Live account and so your users wouldn’t normally have the authorization to join their devices to your central mesh.

Three Takeaways

Arash listed three main takeaways from this talk:

  • Live Services add value to different types of applications
  • Mesh-enabled web apps extend web sites to the desktop
  • The Live Framework is a standards-based API, available to all types of apps

How It All Works

The basic idea is to start with a “Mesh-enabled” web site.  This is a web site that delivers content from the user’s Mesh, including things like contacts and files.  Additionally, the web application could store all of its data in the Mesh, rather than on the web server where it is hosted.

Once a web application is Mesh-enabled, you have the ability to run it on various devices.  You basically create a local client, targeted at a particular platform, and have it work with data through the Live Framework API.  It typically ends up working with a cached local copy of the data, and the data is automatically synchronized to the cloud and then to other devices that are running the same application.

This basically implements the Mesh vision that Ray Ozzie presented first at Mix 2008 in Las Vegas and then again at PDC 2008 in Los Angeles.  The idea is that we move a user’s data into the cloud and then the data follows them around, no matter what device they’re currently working on.  The Mesh knows about a user’s:

  • Devices
  • Data, including special data like Contacts
  • Applications
  • Social graph  (friends)

The User’s Perspective

The user must do a few things to get your application or web site pointing at his data.  As a developer, you send him a link that lets him go sign up for a Live account and then register your application in his Mesh.  Registration, from the user’s perspective, is a way for him to authorize your application to access his Mesh-based data.

Again, it’s required that the user has, or gets, a Live account.  That’s sort of the whole idea: we’re talking about developing applications that run on the Live platform.

Run Anywhere

There is still work to be done, on the part of the developer, to be able to run the application on various devices, but the basic choices are:

  • Locally, on a client PC or Mac
  • In a web browser, anywhere
  • From the Live Desktop itself, which is hosted in a browser

(In the future, with Silverlight adding support for running Silverlight-sandboxed apps outside of the browser, we can imagine that as a fourth option for Mesh-enabled applications).

The Developer’s Perspective

In addition to building the mesh application, the developer must also register his application, using the Azure Services Portal.  Under the covers, the application is just an Azure-based service.  And so you can leverage the Azure standard goodies, like running your service in a test mode and then deploying/publishing it.

One other feature that is made available to Mesh application authors is the ability to view basic analytics data for your application.  Because the underlying Mesh service is aware of your application, wherever it is running, it can collect data about usage.  The data is “anonymized”, so you can’t see data about individual users, but you can view general metrics.

Arash talked in some detail about the underlying security model, showing how a token is granted to the user/application.


Mesh obviously seems a good fit for some applications.  You can basically run on a platform that gives you a lot of services, as well as access to useful data that may already exist in a particular user’s Mesh environment.  There is also some potential for cross-application sharing of data, once common data models are agreed upon.

But the choice to develop on the Mesh platform implies a decision to sign your users up as part of the Mesh ecosystem.  While the programming APIs are entirely open, using basic HTTP/REST protocols, the platform itself is owned/hosted/run exclusively by Microsoft.

Not all of your users will want to go through the hassle of setting up a Live account in order to use your application.  What makes it a little worse is that the process is far from seamless for them.  It would be easier if you could hide the Live branding and automate some of the configuration.  But the user must actually log into Live and authorize your application.  This is a pretty high barrier, in terms of usability, for some users.

This also means that your app is betting on the success of the platform itself.  If the platform doesn’t become widely adopted, few users will live (pardon the pun) in that environment.  And the value of hosting your application in that ecosystem becomes less clear.

It remains to be seen where Live as a platform will go.  The tools and the programming models are rich and compelling.  But whether the platform will live up to Ozzie’s vision is still unclear.

Session – Offline-Enabled Data Services

PDC 2008, Day #3, Session #4, 1 hr 15 mins

Pablo Castro

I attended a second session with Pablo Castro (the previous one was the session on Azure Tables).  This session focused on a future capability in ADO.NET Data Services that would allow taking data services “offline” and then occasionally synching them with the online data.

Background – Astoria

ADO.NET Data Services was recently released as part of the .NET Framework 3.5 SP1 release.  It was formerly known as project “Astoria”.

The idea of ADO.NET Data Services is to allow creating a data service that lives on the web.  Your data source can be anything—a SQL Server database, third-party database, or some local data store.  You wrap your data source using the Entity Data Model (EDM) and ADO.NET Data Services Runtime.  Your data is now available over the web using standard HTTP protocols.

Once you have a data service that is exposed on the web, you can access it from any client.  Because the service is exposed using HTTP/REST protocols, you can access your data using simple URIs.  By using URIs, you are able to create/read/update/delete your data (POST/GET/PUT/DELETE in REST terms).
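For a sense of what those URIs look like (the service and entity set names here are made up, but the conventions are Astoria’s):

```
GET    /MyData.svc/Customers                              read the whole set
GET    /MyData.svc/Customers('ALFKI')                     read one entity by key
POST   /MyData.svc/Customers                              create (entity in body)
PUT    /MyData.svc/Customers('ALFKI')                     update
DELETE /MyData.svc/Customers('ALFKI')                     delete
GET    /MyData.svc/Customers?$filter=Country eq 'Spain'   query
```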

Because we can access the data using HTTP, our client is not limited to being a .NET application; it can be any software that knows about the web and HTTP.  So we can consume our data from JavaScript, Ruby, or whatever.  And the client could be a web-based application or a thick client somewhere that has access to the web.

If your client is a .NET client, you can use the ADO.NET Data Services classes in the .NET Framework to access your data, rather than having to build up the underlying URIs.  You can also use LINQ.
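The client-side experience looks roughly like this.  (A sketch: the service URI and the Customer entity type are illustrative; DataServiceContext lives in System.Data.Services.Client.)

```csharp
var ctx = new DataServiceContext(new Uri("http://example.com/MyData.svc"));

// The LINQ query below is translated by the client library into a query URI
// (with a $filter option) and executed as an HTTP GET when enumerated.
var spanishCustomers =
    from c in ctx.CreateQuery<Customer>("Customers")
    where c.Country == "Spain"
    select c;

foreach (var c in spanishCustomers)
    Console.WriteLine(c.CompanyName);
```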

So that’s the basic idea of using ADO.NET Data Services and EDM to create a data service.  For more info, see:

The Data Synchronization Landscape

Many of the technologies that made an appearance at PDC 2008 make heavy use of data synchronization.  Data synchronization is available to Azure cloud services or to Live Mesh applications.

The underlying engine that handles data synchronization is the Microsoft Sync Framework.  The Sync Framework is a synchronization platform that allows creating sync providers, sitting on top of data that needs to be synchronized, as well as sync consumers—clients that consume that data.

The basic idea with sync is that you have multiple copies of your data in different physical locations and local clients that make use of that data.  Your client would work with its own local copy of the data and then the Sync Framework would ensure that the data is synched up with all of the other copies of the data.

What’s New

This session talked about an effort to add support in ADO.NET Data Services for offline copies of your Astoria-served data, using the Sync Framework to do data synchronization.

Here are the basic pieces (I’m too lazy to draw a picture).  This is just one possible scenario, where you want to have an application that runs locally and makes use of a locally cached copy of your data, which exists in a database somewhere:

  • Data mainly “lives” in a SQL Server database.  Assume that the database itself is not exposed to the web
  • You’ve created a data service using ADO.NET Data Services and EDM that exposes your SQL Server Data to the web using a basic REST-based protocol.  You can now do standard Create/Read/Update/Delete operations through this interface
  • You might have a web application running somewhere that consumes this data.  E.g. A Web 2.0 site built using Silverlight 2, that allows viewing/modifying the data.  Note that the web server does not have a copy of your data, but goes directly to the data service to read/write its data.
  • Now you create a thick client that also wants to read/write your data.  E.g. A WPF application.  To start with, you assume that you have a live internet connection and you configure the application to read/write data directly from/to your data service

At this point, you have something that you could build today, with the tools in the .NET Framework 3.5 SP1.  You have your data out “in the cloud” and you’ve provided both rich and thin clients that can access the data.

Note: If you were smart, you would have reused lots of code between the thin (Silverlight 2) and thick (WPF) clients.  Doing this gives your users the most consistent GUI between online and offline versions.

Now comes the new stuff.  Let’s say that you have cases where you want your thick WPF client to be able to work even when an Internet connection is not present.  Reasons for doing this include:

  • You’re working on a laptop, somewhere where you don’t have an Internet connection (e.g. airplane)
  • You want the application to be more reliable—i.e. app is still usable even if the connection disappears from time to time
  • You’d like the application to perform a bit better.  As it stands, performance depends on network bandwidth.  (The “lunchtime slowdown” phenomenon).

Enter Astoria Offline.  This is the set of extensions to Astoria that Pablo described, which is currently not available, but planned to be in Alpha by the end of the year.

With Astoria Offline, the idea is that you get a local cache of your data on the client PC where you’re running your thick client.  Then what happens is the following:

  • Your thick (WPF) application works directly with the offline copy of the data
  • Performance is improved
  • Much more reliable—the data is always there
  • You initiate synchronization (or set it up from time to time) to synch data back to the online copy

This synchronization is accomplished using the new Astoria Offline components.  When you do synchronize, the synchronization is two-way, which means that you update both copies with any changes that have occurred since you last synched:

  • All data created locally is copied up to the online store
  • Data created online is copied down
  • Changes are reconciled—two-way
  • Deletions are reconciled—two-way
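A toy way to picture what the reconciliation does (my own illustration, not the Sync Framework API): think of each store as mapping a key to a value plus a version number, with deletions kept as tombstone entries (a null value with an advanced version) so that they propagate in both directions.

```csharp
// After Reconcile runs, both stores hold the higher-versioned entry for
// every key, including tombstones (entries whose Item1 is null) for deletions.
static void Reconcile(
    Dictionary<string, Tuple<string, int>> local,
    Dictionary<string, Tuple<string, int>> online)
{
    foreach (var key in local.Keys.Union(online.Keys).ToList())
    {
        Tuple<string, int> l, o;
        local.TryGetValue(key, out l);
        online.TryGetValue(key, out o);

        // The side with the higher version wins; ties go to the local copy.
        var winner = (o == null || (l != null && l.Item2 >= o.Item2)) ? l : o;
        local[key] = winner;
        online[key] = winner;
    }
}
```

The real framework is far richer than this (change tracking, conflict detection, and so on), but the two-way “copy the winner to the loser” shape is the essence of the list above.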

Pablo did a basic demo of this scenario and it worked just as advertised.  He showed that the client worked with the local copy of the data and that everything synched properly.  He also showed off some early tooling in Visual Studio that will automate much of the configuration that is required for all of this to work.

Interestingly, it looked like in Pablo’s example, the local copy of the data was stored in SQL Server Express.  This was a good match, because the “in the cloud” data was stored in SQL Server.

How Did They Do It?

Jump back to the description of the Microsoft Sync Framework.  Astoria Offline is using the sync framework to do all of the underlying synchronization.  They’ve written a sync provider that knows about the entity data model and interfaces between EDM and the sync framework.

Extensibility Points

I’m a little fuzzier on this area, but I think I have a general sense of what can be done.

Note that the Sync Framework itself is extensible—you can write your own sync providers, providing synchronized access to any data store that you care to support.  Once you do this, you get 2-way (or more) synchronization between your islands of custom data.

But if I understood Pablo correctly, it sounds like you could do this a bit differently with Astoria Offline in place.  It seems like you could expose your custom data through the Entity Framework, by building a custom data source so that the EDM can see your data.  (An EntityFrameworkSyncProvider fits in here somewhere).  I’m guessing that once you serve up your data in a relational manner to the EDM, you can then synch it using the Astoria Offline mechanisms.  Fantastic stuff!

Going Beyond Two Nodes

One could imagine going beyond just an online data source and an offline copy.  You could easily imagine topologies that had many different copies of the data, in various places, all being synched up from time to time.

Other Stuff

Pablo talked about some of the other issues that you need to think about.  Conflict detection and resolution is a big one.  What if two clients both update the same piece of data at the same time?  Classic synchronization issue.

The basic things to know about conflicts, in Astoria Offline, are:

  • Sync Framework provides a rich model for detecting/resolving conflicts, under the covers
  • Astoria Offline framework will detect conflicts
  • The application provides “resolution handlers” to dictate how to resolve the conflict
    • Could be locally—e.g. ask the user what to do
    • Or online—automatic policies

Pablo also talked briefly about the idea of Incremental Synchronization.  The idea is that you might want to synch things a little bit at a time, in a batch-type environment.

There was a lot more stuff here, and a lot to learn.  Many of the concepts just bubble up from the Sync Framework.


Astoria Offline is potentially game-changing.  In my opinion, of the new technologies presented at PDC 2008, Astoria Offline is the one most likely to change the landscape.  In the past, vendors have generally had to pick between working with live data or local data.  Now they can do both.

In the past, the online vs. offline data choice was driven by whether you needed to share the data across multiple users.  So the only apps that went with offline data were the ones that didn’t need to share their data.  What’s interesting about Astoria Offline is that these apps/scenarios can now leave their data essentially local, but make the data more mobile, across devices.  Imagine an application that stores local data that only it consumes.  Today, if you want to run that app on multiple machines, you have to manually copy the data, or move it to a share seen by both devices.  With Astoria Offline, you can set up an online location that each device synchs to, as needed.  You can just move from device to device and your data will follow you.  You can imagine that this makes it much easier to move apps out to mobile devices.

This vision is very similar to what Live Mesh and Live Services promise.  But the difference is that here your app and its data don’t need to live in the MS Live space.  Your data can be in whatever format you like, and nobody needs to sign up with MS Live.

When Can I Get It?

Pablo mentioned a basic timeline:

  • Early Alpha by the end of the year
  • CTPs later, i.e. next year


In addition to the links I listed above, you might also check out:

Session – Windows Azure Tables: Programming Cloud Table Storage

PDC 2008, Day #3, Session #3, 1 hr 15 mins

Pablo Castro, Niranjan Nilakantan

Pablo and Niranjan did a session that went into some more detail on how the Azure Table objects can be used to store data in the cloud.


This talk dealt with the “Scalable Storage” part of the new Azure Services platform.  Scalable Storage is a mechanism by which applications can store data “in the cloud” in a highly scalable manner.

Data Types

There are three fundamental data types available to applications using Azure Storage Services:

  • Blobs
  • Tables
  • Queues

This session focused mainly on Tables.  Specifically, Niranjan and Pablo addressed the different ways that an application might access the storage service programmatically.


Tables are a “massively scalable” data type for cloud-based storage.  They are able to store billions of rows, are highly available, and “durable”.  The Azure platform takes care of scaling out the data automatically to multiple servers, if necessary.  (With some hints on the part of the developer).

Programming Model

Azure Storage Services are accessed through ADO.NET Data Services (Astoria).  Using ADO.NET Data Services, there are basically two ways for an application to access the service.

  • .NET API   (System.Data.Services.Client)
  • REST interface   (using HTTP URIs directly)
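Because the REST interface is just HTTP, any stack can construct these query URIs by hand.  Here’s a rough sketch in Python of what building such a URI looks like (the account and table names are hypothetical, and the exact URI shape shown is an assumption based on the Astoria conventions described in the talk):

```python
from urllib.parse import quote

def table_query_uri(account, table, filter_expr=None, top=None):
    # Entity set addressed by path; query options are $-prefixed parameters.
    # (Account/table names are hypothetical, for illustration only.)
    uri = "http://{0}.table.core.windows.net/{1}()".format(account, table)
    options = []
    if filter_expr is not None:
        options.append("$filter=" + quote(filter_expr))
    if top is not None:
        options.append("$top=" + str(top))
    return uri + ("?" + "&".join(options) if options else "")

# e.g. "give me the first 10 readings in one partition"
uri = table_query_uri("myaccount", "Readings",
                      "PartitionKey eq '2008-10-30'", top=10)
```

An HTTP GET against a URI like this returns the matching rows as an AtomPub feed, which is why any browser or HTTP library can consume the data.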

Data Model

It’s important to note that Azure knows nothing about your data model.  It does not store data in a relational database or access it via a relational model.  Rather, you specify a Table that you’d like to store data in, along with a simple query expression for the data that you’d like to retrieve.

A Table represents a single Entity and is composed of a collection of rows.  Each row is uniquely defined by a Row Key, which the developer specifies.  Additionally, the developer specifies a Partition Key, which is used by Azure in knowing how to split the data across multiple servers.

Beyond the Row Key and Partition Key, the developer can add any other properties that she likes, up to a total of 255 properties.  While the Row and Partition Keys must be string data types, the other properties support other data types.


Azure storage services are meant to be automatically scalable, meaning that the data will be automatically spread across multiple servers, as needed.

In order to know how to split up the data, Azure uses a developer-specified Partition Key, which is one of the properties of each record.  (Think “field” or “column”).

The developer should pick a partition key that makes sense for his application.  It’s important to remember two things:

  • Querying for all data having a single value for a partition key is cheap
  • Querying for data having multiple partition key values is more expensive

For example, if your application often retrieves data by date and typically shows data for a single day, then it would make sense to have a CurrentDate property in your data entity and to make that property the Partition Key.

The way to think of this is that each possible unique value for a Partition Key represents a “bucket” that will contain one or more records.  If you pick a key that results in only one record per bucket, that would be inefficient.  But if you pick a key that results in a set of records in the bucket that you are likely to ask for together, this will be efficient.
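The bucket idea can be sketched with a plain dictionary (the sensor-reading data below is invented for illustration):

```python
from collections import defaultdict

# Hypothetical readings keyed by date: PartitionKey is the date,
# RowKey identifies the reading within that day's "bucket".
readings = [
    {"PartitionKey": "2008-10-28", "RowKey": "r1", "temp": 21.0},
    {"PartitionKey": "2008-10-28", "RowKey": "r2", "temp": 21.4},
    {"PartitionKey": "2008-10-29", "RowKey": "r1", "temp": 20.2},
]

# Each distinct PartitionKey value becomes one bucket
buckets = defaultdict(dict)
for r in readings:
    buckets[r["PartitionKey"]][r["RowKey"]] = r

# Cheap: all data for one partition lives in a single bucket,
# which Azure can keep together on one server
one_day = buckets["2008-10-28"]

# More expensive: a query spanning partition values has to touch
# every bucket, which may mean multiple servers
all_r1 = [b["r1"] for b in buckets.values() if "r1" in b]
```

The date-based key works well here precisely because a “show me one day” query maps to a single bucket.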

Accessing the Data Programmatically

Pablo demonstrated creating the classes required to access data stored in an Azure storage service.

He started by creating a class representing the data entity to be stored in a single table.  He selected and defined properties for the Partition and Row Keys, as well as other properties to store any other desired data.

Pablo also recommended that you create a single class to act as an entry point into the system.  This class then acts as a service entry point for all of the data operations that your client application would like to perform.

He also demonstrated using LINQ to run queries against the Azure storage service.  LINQ automatically creates the corresponding URI to retrieve, create, update, or delete the data.


Pablo and Niranjan also touched on a few other issues that most applications will deal with:

  • Dealing with concurrent updates  (using ETag and If-Match)
  • Pagination  (using continuation tokens)
  • Using Azure Queues for pseudo-transactional deletion of data
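The continuation-token pattern can be simulated without any service at all.  A sketch (the page size and the integer token are invented; the real service hands an opaque token back in response headers, which you echo on the next request):

```python
def fetch_page(items, token, page_size=2):
    """Stand-in for one paged REST GET: returns a page of results
    plus a continuation token, or None when there is no more data."""
    page = items[token:token + page_size]
    next_token = token + page_size if token + page_size < len(items) else None
    return page, next_token

def fetch_all(items):
    """Client-side pagination loop: keep requesting pages, passing the
    continuation token back, until the service stops returning one."""
    results, token = [], 0
    while token is not None:
        page, token = fetch_page(items, token)
        results.extend(page)
    return results
```

The key point is that the client drives the loop; the service never has to hold a cursor open between requests.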


Pablo and Niranjan demonstrated that it was quite straightforward to access Azure storage services from a .NET application.  It’s also the case that non-.NET stacks could make use of the same services using a simple REST protocol.

It was also helpful to see how Pablo used ADO.NET Data Services to construct a service layer on top of the Azure storage services.  This seems to make consuming the data pretty straightforward.

(I still might have this a little confused—it’s possible that Astoria was just being used to wrap Azure services, rather than exposing the data in an Astoria-based service to the client.  I need to look at the examples in a little more detail to figure this out).

Original Materials

You can find the video of the session at:

Session – Silverlight Controls Roadmap

PDC 2008, Day #3, Session #2, 45 mins

Shawn Burke
Product Unit Manager
WPF/Silverlight Controls Team, Microsoft

Shawn Burke gave a short talk on the new controls being released this week in the Silverlight Toolkit, on Codeplex.

He started by outlining the general strategy for releasing controls toolkits like this one:

  • Focus on controls for both WPF & Silverlight
  • Ship out of band from major releases
  • Ship w/source code
  • Fold best (and most popular) controls back into the mainline product

I also appreciated that Shawn outlined the idea of “quality bands”.  Every control goes through a lifecycle, where it passes through various quality bands before eventually making it into the mainline product:

  • Experimental
  • Preview   (team is committed to it)
  • Stable  (equivalent to Beta—feature-complete)
  • Mature   (bugs are fixed)

This is great—because the team doesn’t have to wait for the entire toolkit to reach a particular quality level before releasing it to the public.  Instead, they can assess the quality of each control and then make that known.

WPF Parity Controls

Some of the controls included in the toolkit are “parity” controls—i.e. controls that are already in WPF and now being added to Silverlight.  They include:

  • DockPanel
  • Expander
  • TreeView

DockPanel (Stable)

A DockPanel control allows docking child elements to one side of the container.  The last child can be made to fill the remaining space in the container.

Expander (Preview)

The Expander displays overview and details information, with the details view showing when the user clicks on an expander widget.  You can also change the direction that the content expands in (the default is Down).  The first image below shows the default state—content not yet expanded.  When the user clicks on the expander icon, the Content appears and the icon changes to show an up arrow.

TreeView (Stable)

Shawn indicated that the TreeView was by far the control most frequently asked for by users.  Its behavior is quite familiar—presenting hierarchical data in a dynamic way and allowing the user to expand/collapse the various nodes.

What’s very cool is that the individual nodes in the tree can be basically anything, and fully styled.

New Non-WPF Controls

Shawn mentioned a series of controls that are brand new—i.e. not yet in WPF either.

  • AutoComplete
  • Charting
  • NumericUpDown

AutoComplete (Preview)

Auto completion is a pretty standard feature on the web.  The idea is to do some data lookup while a user types and then display relevant/possible matches to what they are typing.  The cool thing is that the content displayed can be nearly anything.  Shawn demonstrated having a DataGrid show up to display selected data.
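The core lookup-while-typing behavior amounts to filtering a candidate list on each keystroke.  A rough sketch of the matching step (the sample data is invented; the real control then renders the matches in whatever content template you give it):

```python
def suggest(query, items, limit=5):
    """Return up to `limit` items that match what the user has typed
    so far (case-insensitive prefix match)."""
    q = query.lower()
    return [i for i in items if i.lower().startswith(q)][:limit]

states = ["Minnesota", "Mississippi", "Missouri", "Montana"]
matches = suggest("mi", states)
```

In practice the lookup might go to a web service instead of an in-memory list, but the filter-as-you-type idea is the same.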

Here’s another example, showing some nice styling:

Charting (Preview)

Shawn ran some pretty amazing charting demos, showing charts on a web page updating in real-time.  He said that Microsoft partnered with Dundas, long-time vendor of charting controls, to gain some expertise in the area.

Here are some samples:


There are some wonderful goodies in this new set of Silverlight tools.  It’s clear that as time goes by, Silverlight will just continue to become more mature and the set of controls will continue to grow—in both WPF and Silverlight.

Original Materials

You can find video of Shawn’s presentation at:

You can play with “live” samples of the various controls at:

Charting samples can be found at:

Session – Live Services: Live Framework Programming Model Architecture and Insights

PDC 2008, Day #3, Session #1, 1 hr 15 mins

Ori Amiga

My next session dug a bit deeper into the Live Framework and some of the architecture related to building a Live Mesh application.

Ori Amiga was presenting, filling in for Dharma Shukla (who just became a new Dad).


It’s still a little unclear what terminology I should be using.  In some areas, Microsoft is switching from “Mesh” to just plain “Live”.  (E.g. Mesh Operating Environment is now Live Operating Environment).  And the framework that you use to build Mesh applications is the Live Framework.  But they are still very much talking about “Mesh”-enabled applications.

I think that the way to look at it is this:

  • Azure is the lowest level of cloud infrastructure stuff
  • The Live Operating Environment runs on top of Azure and provides some basic services useful for cloud applications
  • Mesh applications run on top of the LOE and provide access to a user’s “mesh”: devices, applications and data that lives inside their mesh

I think that this basically means that you could have an application that makes use of the various Live Services in the Live Operating Environment without actually being a Mesh application.  On the other hand, some of the services in the LOE don’t make any sense to non-Mesh apps.

Live Operating Environment  (LOE)

Ori reviewed the Live Operating Environment, which is the runtime that Mesh applications run on top of.  Here’s a diagram from Mary Jo Foley’s blog:

This diagram sort of supports my thought that access to a user’s mesh environment is different from the basic stuff provided in the LOE.  According to this particular view, Live Services are services that provide access to the “mesh stuff”, like their contact lists, information about their devices, the data stores (data stored in the mesh or out on the devices), and other applications in that user’s mesh.

The LOE would contain all of the other stuff—basically a set of utility classes, akin to the CLR for desktop-based applications.  (Oh wait, Azure is supposed to be “akin to the CLR”). *smile*

Ori talked about a list of services that live in the LOE, including:

  • Scripting engine
  • Formatters
  • Resource management
  • FSManager
  • Peer-to-peer communications
  • HTTP communications
  • Application engine
  • Apt(?) throttle
  • Authentication/Authorization
  • Notifications
  • Device management

Here’s another view of the architecture (you can also find it here).

Also, for more information on the Live Framework, you can go here.

Data in the Mesh

Ori made an important point about how Mesh applications access their data.  If you have a Mesh client running on your local PC, and you’ve set up its associated data store to synch between the cloud and that device, the application uses local data, rather than pulling data down from the cloud.  Because it’s working entirely with locally cached data, it can run faster than the corresponding web-based version (e.g. running in the Live Desktop).

Resource Scripts

Ori talked a lot about resource scripts and how they might be used by a Mesh-enabled application.  An application can perform actions in the Mesh using these resource scripts, rather than performing actions directly in the code.

The resource scripting language contains things like:

  • Control flow statements – sequence and interleaving, conditionals
  • Web operation statements – to issue HTTP POST/PUT/GET/DELETE
  • Synchronization statements – to initiate data synchronization
  • Data flow constructs – for binding statements to other statements(?)

Ori did a demo that showed off a basic script.  One of the most interesting things was how he combined sequential and interleaved statements.  The idea is that you specify what things you need to do in sequence (like getting a mesh object and then getting its children), and what things you can do in parallel (like getting a collection of separate resources).  The parallelism is automatically taken care of by the runtime.
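The sequence/interleave distinction maps naturally onto modern async constructs.  A rough analogy in Python (the resource names are invented; this illustrates the concept, not the Mesh resource scripting syntax):

```python
import asyncio

async def fetch(name):
    # Stand-in for a web operation statement (an HTTP GET of a resource)
    await asyncio.sleep(0)
    return f"got {name}"

async def main():
    # Sequence: must get the mesh object before asking for its children
    obj = await fetch("meshObject")
    # Interleave: independent resources can be fetched in parallel,
    # and the runtime takes care of the actual concurrency
    children = await asyncio.gather(fetch("child1"), fetch("child2"))
    return obj, list(children)

obj, children = asyncio.run(main())
```

The appeal of the scripting model is the same as here: you declare which steps depend on each other, and the runtime interleaves everything else for you.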

Custom Data

Ori also talked quite a bit about how an application might view its data.  The easiest thing to do would be to simply invent your own schema and then be the only app that reads/writes the data in that schema.

A more open strategy, however, would be to create a data model that other applications could use.  Ori talked philosophically here, arguing that this openness serves to improve the ecosystem.  If you can come up with a custom data model that might be useful to other applications, they could be written to work with the same data that your application uses.

Ori demonstrated this idea of custom data in Mesh.  Basically you create a serializable class and then mark it up so that it gets stored as user data within a particular DataEntry.  (Remember: Mesh objects | Data feeds | Data entries).

This is an attractive idea, but the implementation seems a bit clunky.  The custom data is embedded in the standard AtomPub stream, but not in a queryable way.  It looked more like it was jammed into an XML element inside the <DataEntry> element.  This means that your custom data items would not be directly queryable.

Ori did go on to admit that custom data isn’t queryable or indexable, and that it’s really intended only for “lightweight data”.  This is at odds with the philosophy of a reusable schema for other applications.

Tips & Tricks

Finally, Ori presented a handful of tips & tricks for working with Mesh applications:

  • To clean out local data cache, just delete the DB/MR/Assembler directories and re-synch
  • Local metadata is actually stored in SQL Server Express.  Go ahead and peek at it, but be careful not to mess it up.
  • Use the Resource Model Browser to really see what’s going on under the covers.  What it shows you represents the truth of what’s happening between the client and the cloud
  • One simple way to track synch progress is to just look at the size of the Assembler and MR directories
  • Collect logs and send to Microsoft when reporting a problem


Ori finished with the following summary:

  • Think of the cloud as just a special kind of device
  • There is a symmetric cloud/client programming model
  • Everything is a Resource

Keynote #4 – Rashid

PDC 2008, Day #3, Keynote #4, 1.5 hrs

Rick Rashid

Rick Rashid, Senior Vice President of Microsoft Research, delivered the final PDC 2008 keynote.

Rick described how Microsoft Research is organized and talked about their mission statement:

  • Expand state of the art in each area in which we do research
  • Rapidly transfer innovative technologies into Microsoft products
  • Ensure that Microsoft products have a future

The Microsoft behemoth is truly impressive.  Here are a couple of tidbits:

  • 10-30% of papers presented at most CSci and Software Engineering academic conferences are by Microsoft Research
  • Microsoft Research employs about 850 Ph.D. researchers—about as large a staff as most research-oriented universities

Mr. Rashid made a good case for why we do basic research.  It’s not so much for the immediate applications.  Instead, he argued, the goal is to enable a company to respond quickly to change, based on an existing reservoir of people and technologies that can be brought to bear on new problems.

I for one was expecting some cool demos at the Research keynote.  There were several demos, but they started out fairly mundane.

Feng Zhao – Energy Sensing

The first demo was by Feng Zhao, a Principal Researcher.  He talked about a little climate sensor that Microsoft developed and which they have been using to gather climate data.

The first example was indoors.  Microsoft had actually hung a large number of these sensors from the ceiling of the keynote auditorium.  They’d then been acquiring basic temperature data for several days and transmitting that data back to a server.  Feng was able to show all kinds of graphs showing the temperature map of the room, including how it warmed up a little when people came in.

Feng also explained how they are using similar sensors in outdoor climate research projects.  For example, they collect Alpine climate data in Switzerland.

World Wide Telescope

The next demo was definitely a notch above the energy sensing.

The WorldWide Telescope is a Microsoft Research project that went public earlier this year.  It’s a web site that ties together a huge database of space images from all different sources.  The end result is a 3D virtual universe that you can fly around in, navigating to and viewing various objects.  As you zip around in the universe, you automatically see stitched together images of whatever objects would be in your view.

Microsoft announced in this keynote a new version of WorldWide Telescope, being released today.  It included lots of new images, as well as improved views of our solar system.

The demo of the new WorldWide Telescope site was truly awe-inspiring.


The energy level went up a little bit more as Matt MacLaurin came out to demo Boku—an animated world used to teach kids how to program.

In the Boku world, kids create “programs” visually by selecting objects and then icons indicating what those objects should do.  Actions can include things like moving towards other objects, eating objects, or shooting at objects.

The demo was pretty impressive.  The little Boku world was rendered beautifully in 3D and Matt was able to very quickly create and animate objects.  More importantly, he said that kids find the resulting “programming” environment very intuitive to use.  This allows them to learn basic logic and programming skills at a very early age.

Boku was great, but nothing compared to what came next.


Most of us have seen the online videos and demos of the Microsoft Surface.  The basic idea is that a PC projects an image of a user interface surface up onto a flat table that you can interact with by touching.  It’s sort of a combination coffee table and touchable PC screen.

The big thing about Surface is that it supports something called “multi-touch”.  So not only can you move things around on the surface by touching/dragging with one finger, you can initiate more complicated gestures by using two fingers at the same time.  For example, you might use your thumb and forefinger to zoom into a photo by putting both fingers down and then spreading them apart.

Surface also includes an infrared camera that allows it to “see” things placed on the surface itself.  This allows user interaction with simple objects, or even something like a user’s arm.  (Think about a Poker game that would flip your cards over when it saw you put your arm down to block them from another player’s view).

That’s the basic idea of Surface.  It’s available today, for something like $15,000.

But at today’s keynote, several of the guys from the Surface team demoed the next big extension to Surface, which they called SecondLight.

The basic idea of SecondLight is to extend both the projection area and the infrared detection mechanism out into the space above the surface.  So if I held up a piece of tracing paper 8-10” above the surface, I’d see an image on the paper, as well as on the surface.  Ok, no big deal, right?  Well, the big deal is that the image on the tracing paper can be different from the image on the surface below it!

The guys’ demo showed how this might work.  Let’s say you’re looking at a Virtual Earth map that showed an aerial view of some location.  The Surface would display the standard aerial view of the place.  Now let’s say that you hold a little sheet of tracing paper above the Surface, over one particular area of the map.  Surface might project a street view, rather than the aerial view, onto your paper.  But the main surface of the table will continue to show the original aerial view.  Other applications might provide a photo on the Surface and then some description of that photo on the paper that you hold over it.

How the hell do they do that?  Well, they explained that their projector down under the table can actually interlace two completely separate images—one that they project up to the surface and one that they project beyond the surface.  This happens at such a high frequency that you don’t see any flicker; you just see both images simultaneously.

But wait, there’s more!  Now the guys demonstrated how Surface can also detect objects above the table with its infrared camera.  They took a little picture frame with a plastic see-through surface—something about 4-5” across.  To start with, there was a silhouette of a man on the main Surface.  Now when they held the see-through frame above the Surface, the man moved from the Surface up to the surface of their handheld picture frame.  Truly a “holy shit” moment.  Even more incredible, the presenter started slowly tilting the picture frame from horizontal to vertical.  As he did this, the aspect ratio of the man on the frame stayed the same—as opposed to the projection becoming narrower.  The effect was as if you’d grabbed the silhouette off the flat surface and stood it up.  Unbelievable.

How did they do this last part?  Well, simple—the infrared camera can see the outline of the handheld frame, so it knows its dimensions.  And it then pre-foreshortens the image of the man, so the entire image of the man is mapped to the entire size of the picture frame.  The result is that the man stands up.

This final demo was truly the highlight of the keynote and it made up for all of the boring opening acts.  There are many ways that you can imagine using GUI technology like this.  Just think of a computer that can recognize your face and see where you are.  We’re in for some truly amazing advances in user interaction over the next few years.

You can find a video of the demo at:

Session – Deep Dive: Building an Optimized, Graphics-Intensive Application in Microsoft Silverlight

PDC 2008, Day #2, Session #4, 1 hr 15 mins

Seema Ramchandani

The final session of the day was all about optimizing graphics-based Silverlight applications.  The talk was a little bit different from what I expected.  I was imagining it being about how to do custom 2D graphics.  Instead, it was more geared at understanding the underlying rendering loop and optimizing basic animations.

Seema is a Program Manager in the Silverlight group and described her job as being focused on performance.  She is the person, she explained, that people call when they can’t figure out why their Silverlight application is running so slowly.

The Dancing Peacock

Seema’s first real-world story gives us a new term that we can use in assessing application performance:

The Dancing Peacock = the portions of your application that are consuming resources, but not contributing to the user experience in any meaningful way.

The story goes something like this.  Someone called up Seema and said that they had a very poorly performing Silverlight application and they could not figure out why it was so slow.  Seema took a look at it and started removing elements to understand which element was contributing to the performance degradation.

What she found, under all of the visible controls, was a giant full-screen animated dancing peacock.  It wasn’t visible, because it was behind all of the other windows, so the designer had left it in the XAML code, figuring that it wasn’t doing any harm.  But as it turns out, the code to calculate all of the peacock’s dance steps was still running in the background, dragging the entire application down.

So Seema’s basic message throughout the talk was—look for the dancing peacocks in your application and remove them.

The Graphics Pipeline

Seema argued that it was important to fully understand how the graphics pipeline in Silverlight works.  If you understand the full sequence of what happens to render graphics to the screen, it can greatly help you in debugging the source of any performance problems.

She showed a fairly detailed diagram of the rendering loop and walked through all of the steps, explaining what happens at each point.

Tips / Tricks

Seema also presented miscellaneous tips and tricks for improving performance.  Without going into details, some of the basic ideas were:

  • Blend in as small a region as possible
  • Mitigate blurry text with UsesLayoutRounding
  • Avoid large-scale animations  (costly)
  • Don’t plug up your UI thread with costly operations
  • Avoid video resizing by encoding at the desired resolution
  • Simplify XAML – avoid bloat


Seema also demonstrated a very useful tool that you can use for profiling Silverlight applications.  She described using a tool called XPerf, which uses Event Tracing for Windows to measure exactly how much time is spent in each module of the underlying native code.

XPerf can be used for debugging, but is most powerful as a way of comparing alternative designs, to see how they impact performance.

Session – Microsoft Silverlight Futures: Building Business Focused Applications

PDC 2008, Day #2, Session #3, 1 hr 15 mins

Jamie Cool

In the Silverlight Futures session, Jamie Cool focused on building business applications using Silverlight.  (From now on, when we say “Silverlight”, we really mean “Silverlight 2”).

Jamie started by summarizing the whole thick/thin client thing and reiterating where Silverlight fits in.

  • WPF apps run natively on your PC
    • Richer UI experience
    • Better performance
    • Access to everything on your PC, e.g. file system
  • ASP.NET web apps run in the browser
    • Broadest reach, running on any platform, any browser
    • Much more limited UI
    • Runs in a sandbox, limited access to local resources
  • Silverlight is somewhere in the middle
    • Runs in your browser
    • Much better user experience than ASP.NET
    • Fewer trips back to server, things running on client
    • Requires client to first download Silverlight runtime

People who write business applications hear requests all the time for accessing their apps through a web browser.  In the past, this has meant a radically different and more limited GUI in the form of ASP.NET.  But with Silverlight, you can really have it all—deliver both rich WPF applications and browser-based applications using Silverlight.  Because Silverlight uses a subset of the .NET Framework, you can even use the same codebase.  (If you’re careful).

Support for Business Apps Using Today’s Platforms

Much of the first half of Jamie’s talk focused on what business applications need, and how today’s platforms support building those applications in Silverlight.

According to Jamie, business applications have the following characteristics:

  • They are mainly focused on working with data and applying business logic to data
  • They need
    • A way to move data between tiers
    • Methods for “shaping” data
    • Methods for soundly applying business logic to the data
    • A way to bind data to the user interface

In terms of moving data between tiers, Jamie mentioned ADO.NET Data Services as really being the best tool for exposing data stored in a database to a web-based application.  He mentioned the same speaking points as the guys in the Astoria talks—the ability to expose/consume data using a simple REST protocol.

Jamie also demoed how the Entity Data Model could be used as a data layer between the database (or data service) and the application.  He also demoed some of the basics of binding Silverlight controls to the data served up through ADO.NET Data Services.

Futures – A Business Logic Framework

The second half of Jamie’s talk focused on a future extension to Silverlight, code-named Alexandria, that would provide a richer framework for allowing business apps hosted in Silverlight to do the kinds of things that business apps typically need to do.

(Jamie never mentioned “Alexandria”, but he was making use of a namespace that had that name in all of his demos).

Jamie did some basic demos of how Alexandria works.  To start with, you create a class to represent your business object, then set up its data binding and use attributes to specify validation rules.
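The pattern Jamie showed—validation rules declared as metadata on the business object, enforced by a generic validator—can be sketched, in spirit, like this (all of the names below are invented for illustration; this is not Alexandria’s actual API):

```python
# Rules attached to the class as metadata, analogous to .NET attributes
def required(value):
    return value not in (None, "")

def max_length(n):
    return lambda value: value is None or len(value) <= n

class Customer:
    # Declarative validation metadata, read by a generic validator
    rules = {
        "name": [required, max_length(50)],
        "email": [required],
    }
    def __init__(self, name=None, email=None):
        self.name, self.email = name, email

def validate(obj):
    """Generic validator: walks the declared rules and collects the
    names of fields that fail any check."""
    errors = []
    for field, checks in obj.rules.items():
        for check in checks:
            if not check(getattr(obj, field)):
                errors.append(field)
                break
    return errors
```

The attraction of the declarative style is that the same rules can drive both server-side checks and client-side UI feedback without being written twice.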

Hold on just a second!  Alexandria suddenly sounds exactly like Rocky Lhotka’s CSLA.NET Framework—especially since Rocky is just finishing up a version of CSLA for Silverlight.  With an entire team of developers, Microsoft has managed to develop a framework that Rocky created (mostly) all by himself.

I’m being a little facetious.  I’m really curious what Rocky’s take is on Alexandria, since it seems to fulfill the same basic mission as CSLA.NET.  I don’t know about many of the details of CSLA, but a couple of things occurred to me:

  • CSLA.NET also supports Win Forms and Web Forms, whereas Alexandria appears to be targeted exclusively at Silverlight
  • CSLA.NET supports a wide variety of transport protocols (anything in WCF, I think), where Alexandria seems to be REST-only
  • CSLA.NET may not directly support binding to an ADO.NET Data Service, or maybe not as seamlessly as Alexandria
  • CSLA.NET is open source, so it benefits from the many improvements that come from the community

Anyway, as I said, I’m not very knowledgeable about CSLA, but it definitely seemed like Alexandria is targeting the exact same set of business requirements as CSLA.  As with any framework comparison, however, each is certain to have its own strengths and weaknesses and to be appropriate for a particular category of applications.

Rocky, what is your take on this?  Ditto current users of CSLA.

Session – Developing Applications Using Data Services

PDC 2008, Day #2, Session #2, 1 hr 15 mins

Mike Flasko

Ok, I’m finally starting to understand the very basics of Astoria (now branded ADO.NET Data Services), as well as how it relates to the Entity Data Model and SQL Data Services.

There have been a lot of new data-related services and features coming out of Microsoft lately.  Astoria, SQL Data Services, the Entity Data Model, dynamic data in ASP.NET, etc.  Without understanding what each of them does, it’s hard to see how they all fit together, or when you’d use the different services.

Mike Flasko started this session by talking about the “data services landscape”.  (He had a nice picture that I’d include here if I had a copy).  There are a few points to be made about the various data services:

  • A wide spectrum of data services is available
  • Some you create yourself, some 3rd party
  • Some on-premises, some hosted/cloud
  • Characteristics of Microsoft-supplied services
    • Simple REST interface
    • Uniform interface  (URIs, HTTP, AtomPub) — i.e. underlying data is different, but you access it in the same way, no matter the service used

(By the way, the new buzzword at PDC 2008 is “on premises” data and services—to be contrasted with data and services that reside “in the cloud”).

ADO.NET Data Services fits in as follows—it’s a protocol for serving up data from different underlying sources using feeds and a REST-ful interface.

The basic idea behind REST and feeds is that this is a simple HTTP-based protocol—which means that any tool running on any platform or development stack can make use of it.  For example, because we use a simple HTTP GET request to read data, any browser or software that knows how to access the web can read our data.

REST is basically a way to piggyback classic CRUD data operations on top of existing HTTP verbs.  Here’s how they map:

  • HTTP POST – Create a new data item
  • HTTP GET – Read one or more data items
  • HTTP PUT – Update an existing data item
  • HTTP DELETE – Delete a data item
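The mapping above can be sketched in a few lines of code.  This is just an illustrative Python sketch of the REST idea, not any actual client library; the service path in the example is a hypothetical one.

```python
# Illustrative sketch: mapping classic CRUD operations onto HTTP verbs,
# REST-style.  (Not the actual ADO.NET Data Services client API.)

CRUD_TO_HTTP = {
    "create": "POST",
    "read":   "GET",
    "update": "PUT",
    "delete": "DELETE",
}

def request_line(operation, resource):
    """Build the HTTP request line a REST client would send."""
    verb = CRUD_TO_HTTP[operation]
    return f"{verb} {resource} HTTP/1.1"

# Hypothetical service path, for illustration only:
print(request_line("read", "/Northwind.svc/Customers"))
# GET /Northwind.svc/Customers HTTP/1.1
```

The point is simply that there is nothing exotic on the wire: any HTTP-capable client can issue these four verbs.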

As an example of how you might use ADO.NET Data Services to provide a uniform API to multiple data sources, Mike talked about an application reading/writing data from several different locations:

  • From an on-premises SQL Server database
  • From SQL Data Services

In both cases, you access the data through the ADO.NET Data Services framework, instead of going to SQL Server or the SQL Data service directly.  You can easily add an ADO.NET Data Services layer that sits on top of an on-premises SQL Server database.  And Mike said that SQL Data Services will support the ADO.NET Data Services conventions.

Mike did a nice demo showing exactly how you might consume on-premises SQL Server data through an ADO.NET Data Services API.  He started by creating a data model using the Entity Data Model framework.  The Entity Data Model is basically just an object/relational mapper that allows you to build a logical data model on top of your physical database.  Once you’ve done this, you just create a service that wires up to the Entity Data Model and exposes its data through ADO.NET Data Services (i.e. a REST-ful interface).

Mike then walked through the actual code for accessing the service that he created.  At this point, you can do everything (Create/Read/Update/Delete) using HTTP POST/GET/PUT/DELETE and simple ASCII URIs.  For example, you can read one or more data elements by just entering the correct URI in your browser.  (Because the browser does an HTTP GET by default).
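To give a feel for how those URIs address data, here is a small sketch of the Astoria-style URI conventions, where each segment narrows the result.  The service and entity names ("Northwind", "Customers", "ALFKI") are hypothetical examples, and this is an illustration of the pattern rather than the exact grammar.

```python
# Sketch of the URI addressing pattern: service root, entity set,
# optional key, optional navigation to a related set.
# All names below are invented for illustration.

def entity_uri(service, entity_set, key=None, navigation=None):
    uri = f"{service}/{entity_set}"
    if key is not None:
        uri += f"('{key}')"          # address a single entity by key
    if navigation:
        uri += f"/{navigation}"      # follow a relationship
    return uri

print(entity_uri("/Northwind.svc", "Customers"))           # all customers
print(entity_uri("/Northwind.svc", "Customers", "ALFKI"))  # one customer
print(entity_uri("/Northwind.svc", "Customers", "ALFKI", "Orders"))
```

Paste any of these into a browser's address bar and, because the browser issues an HTTP GET, you get the corresponding data back as a feed.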

But… This does not mean that your application has to traffic exclusively in an HTTP-based API.  It would be ugly to have your .NET managed code doing all of its data access by building up URI strings.  Instead, there is a standard .NET namespace built on top of the REST-based interface that you’d use if you were writing managed code.

This layering of APIs, REST and .NET, might seem crazy.  But the point is that everything that goes across the wire is REST.  Microsoft has done this because it’s an open standard and opens up their services to 3rd parties.  It also easily bypasses problems with firewalls.  Finally, they provide an object model on top of the URI goo so that .NET applications can work with managed objects and avoid all of the URI stuff.
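The shape of that object layer can be sketched roughly as follows.  This is a toy Python model of the concept, not the real managed client library (which has a much richer API); the class and method names here are invented to mirror the idea of a context that records object changes and later replays them as REST requests.

```python
# Toy sketch of the "objects over REST" layering: application code works
# with objects; the context queues changes, and saving replays them as
# HTTP requests.  All names invented for illustration.

class DataServiceContext:
    def __init__(self, service_root):
        self.service_root = service_root
        self.pending = []            # queued (verb, uri, payload) tuples

    def add_object(self, entity_set, obj):
        self.pending.append(("POST", f"{self.service_root}/{entity_set}", obj))

    def delete_object(self, entity_set, key):
        self.pending.append(
            ("DELETE", f"{self.service_root}/{entity_set}('{key}')", None))

    def save_changes(self):
        # A real implementation would issue each request over HTTP here.
        sent, self.pending = self.pending, []
        return sent

ctx = DataServiceContext("/Northwind.svc")
ctx.add_object("Customers", {"ID": "ALFKI", "Name": "Alfreds"})
sent = ctx.save_changes()
print(sent[0][0], sent[0][1])   # POST /Northwind.svc/Customers
```

The managed code never sees the URI goo; it just adds, changes, and deletes objects, and the layer below speaks REST.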

Miscellaneous Stuff

Mike also showed how a Silverlight 2 application might make use of ADO.NET Data Services to get its data.  Again, using ADO.NET Data Services gives us a lot of flexibility in how we architect the solution.  In Mike’s case, he had the middle tier (app server) store some data in a local SQL Server and some data up in the cloud using SQL Data Services.

Mike did an excellent job at showing how ADO.NET Data Services fits into the data services landscape at Microsoft.  I’m also glad that he brought the Entity Data Model into his examples, so that we could see where it fits in.

Now I’m just wondering if Mike and his team curse the marketing department every time they have to type “ADO.NET Data Services” rather than “Astoria”.

Session – Live Services: What I Learned Building My First Mesh Application

PDC 2008, Day #2, Session #1, 45 mins

Don Gillett

In my next session, Don Gillett explained how he wrote one of the first Live Mesh applications.  Jot is a little note-taking application that allows synchronizing simple lists across instances running on a PC, a phone, or as a “Mesh application” running in your web browser.

Mesh is all about data synchronization and Jot demonstrates how this works.  To start with, the following preconditions must all be set up:

  • User has created a Live Mesh account  (this is free and comes with a free/default amount of storage)
  • Jot has been installed to the PC and the phone
  • Jot has been installed in the Mesh desktop as a “Mesh application”

At this point, Jot will be able to take advantage of the Mesh platform to automatically synchronize its lists with all three endpoints.

Here’s one possible scenario:

  • John is at home, fires up Jot on his PC and adds “toilet paper” to the “Groceries” list in Jot
  • Data is automatically synchronized to the other devices in the Mesh—Jot running on the Mesh desktop and John’s wife’s phone
  • John’s wife stops at the grocery store on the way home, fires up Jot on her phone and gets the current grocery list, which now includes toilet paper

Jot is a very simple application, but it demonstrates the basics of how the Mesh platform works.  Its primary goal is to synchronize data across multiple devices.

Out of the box, you can synchronize data without creating or running Mesh applications.  You just create a data folder up on the Mesh desktop and then set it up to synchronize on the desired devices.  Changes made to files in the corresponding folders on any of the devices will be automatically pushed out to the other devices, as well as to the folder on the Mesh desktop.
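The propagation behavior described above can be sketched with a toy model.  This is not the actual Mesh synchronization protocol (which handles conflicts, offline devices, and much more); it just illustrates the idea of a change on one replica being pushed to all the others.

```python
# Toy sketch of folder synch: each device holds filename -> version,
# and a change made on one device is pushed to every other replica.
# (The real Mesh sync protocol is far more sophisticated.)

def propagate(devices, source, filename, version):
    """Apply a change on `source`, then push it to all other devices."""
    devices[source][filename] = version
    for name, files in devices.items():
        if name != source and files.get(filename, 0) < version:
            files[filename] = version

devices = {"pc": {}, "phone": {}, "mesh_desktop": {}}
propagate(devices, "pc", "groceries.txt", 1)
print(devices["phone"])   # the phone now sees version 1 of the file
```

Because the Mesh desktop is one of the replicas, every change is also implicitly backed up to the cloud.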

In this out-of-the-box data synch scenario, the data lives on the Mesh desktop, which gives you two main things:

  • Data is always being backed up “to the cloud” because it’s stored in the Mesh
  • You can access the data from any public web access point by simply going to the Mesh web site

Writing Your Own Mesh-Enabled Application

Applications written to support Mesh can take advantage of this cross-device synchronization, but only for users who have signed up for a Live Mesh account.

Because this technology is useful only for users who are using Mesh, it remains to be seen how widespread it will be.  If Live Services and Live Mesh aren’t widely adopted, the ecosystem for Mesh applications will be equally limited.

But if you do write an application targeted at Mesh users, the API for subscribing to and publishing Mesh data is very easy to use.  It is simply another .NET namespace that you can get at.  And if you’re writing non-.NET applications, all of the same functionality is available through HTTP, using a REST interface.

What Types of Data Can You Synchronize?

It’s important to remember that Mesh applications aren’t just limited to accessing shared files in shared folders.  Every Mesh application can basically be thought of as a custom data provider and data consumer.  This means two things:

  • It can serve data up to the mesh in any format desired (not just files)
  • That format is understood by the same app running on other devices

As an example, your Mesh application might serve up metadata about files, the files themselves, and combine it with other data local to that device.

The Data Model

Don talked about the underlying data model that your application uses when publishing or consuming data.  It looks something like this:

Mesh Objects

Data Feeds

Data Entries — optionally pointing to Enclosures

Mesh Objects are the top-level shareable units that your Mesh application traffics in.  They can contain one or more Data Feeds.  If they are too large, synchronization will be too costly, because they will change too often.  If they are too small, or fine-grained, users may get shared data that is incomplete.  What’s important is that a Mesh application gets to decide what data units it wants to serve up to the mesh.

Data Feeds represent a collection of data.

Data Entries are the individual chunks of data within a Data Feed.  Each is uniquely identified.  These are the smallest logical storage units and the chunks that will be synchronized using the underlying synchronization technology.

Enclosures are used when your data grows too large to store directly in a text-based feed.  They are owned by and pointed to by an associated Data Entry.  Media files are an example of data that would be stored in an Enclosure, rather than in the Data Entry itself.
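The hierarchy Don described can be modeled roughly like this.  The field names below are my own invention for illustration; the real Live Framework types differ.

```python
# Rough model of the Mesh data hierarchy:
#   Mesh Object -> Data Feeds -> Data Entries (-> optional Enclosure).
# Field names are invented for illustration.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Enclosure:          # bulk payload (e.g. a media file) referenced by an entry
    url: str

@dataclass
class DataEntry:          # smallest synchronized unit, uniquely identified
    entry_id: str
    payload: str
    enclosure: Optional[Enclosure] = None

@dataclass
class DataFeed:           # a collection of data entries
    name: str
    entries: list = field(default_factory=list)

@dataclass
class MeshObject:         # top-level shareable unit the app traffics in
    title: str
    feeds: list = field(default_factory=list)

photos = MeshObject("Vacation photos", [
    DataFeed("images", [DataEntry("1", "beach.jpg metadata",
                                  Enclosure("enclosures/beach.jpg"))])
])
print(photos.feeds[0].entries[0].enclosure.url)   # enclosures/beach.jpg
```

Choosing how much data goes into one Mesh Object is the key design decision, per the granularity trade-off above.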

The Evolution of a Meshified Application

Don presented a nice plan for how you might start with a simple application and then work towards making it a fully Mesh-aware application:

  1. Persist (read/write) data to a local file
  2. Read/write data from/to a local DataFeed object
  3. Utilize FeedSync to read/write your data from/to the Mesh “cloud”

Don then walked through an example, building a tiny application that would synch up a list of simple “checklist” objects, containing just a string and an IsChecked Boolean.
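A sketch of what that checklist data might look like as feed entries: each item is just a string plus an IsChecked Boolean, serialized into an entry that the Mesh could synchronize.  This is a guess at the shape of Don's demo, not his actual code, and the real demo used the Live Framework DataFeed APIs.

```python
# Sketch of the checklist example: a string + IsChecked Boolean,
# round-tripped through a feed-entry representation.  The entry shape
# here is invented for illustration.

import json

def to_feed_entry(entry_id, text, is_checked):
    """Serialize one checklist item into a feed entry."""
    return {"id": entry_id,
            "content": json.dumps({"Text": text, "IsChecked": is_checked})}

def from_feed_entry(entry):
    """Deserialize a feed entry back into (text, is_checked)."""
    data = json.loads(entry["content"])
    return data["Text"], data["IsChecked"]

entry = to_feed_entry("item-1", "toilet paper", False)
print(from_feed_entry(entry))   # ('toilet paper', False)
```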

Final Thoughts

Don mentioned additional Live Services APIs that your application can take advantage of, in addition to data synchronization:

  • Live ID
  • Authentication
  • Contact lists
  • News Feeds  (who is changing what in Mesh)

He also mentioned a couple of tools that are very useful when writing/debugging Mesh applications:

  • Resource Model Browser  (available in the Live Framework SDK)
  • Fiddler 2, a third-party web traffic logger

Keynote #3 – Box, Anderson

PDC 2008, Day #2, Keynote #3, 1.5 hrs

Don Box, Chris Anderson

Don and Chris’ presentation was notable in that they did not use a single PowerPoint slide.  The entire presentation consisted of Don and Chris writing code in Visual Studio.

This will be a short post because Don and Chris went so fast and covered so much stuff that I didn’t make any attempt to take notes on specific techniques.

Their basic demo was to build a simple Azure (cloud-based) service that served up a list of processes running on the server.  They started by writing this service as a “classic” local WCF service, then publishing it on the ServiceBus, to make it available on the cloud.  And finally, they deployed the service into the cloud so that it was being hosted by the Azure servers.

The most important takeaway from this talk was that the service that they wrote was at every point accessible through standards-based protocols.  Azure-based services are accessible through a simple “REST-ful” interface.  The idea of REST is to use the existing GET/PUT/POST/DELETE verbs in HTTP to execute corresponding Read/Update/Create/Delete data operations on the server side.

This session was the hardest to keep up with, mentally, but also the most fun.  Don and Chris traded off talking and coding.  Both were equally quick at talking through explanations of what they’d do next, in moving from a local service to a service published up in the Azure cloud.  And both were equally quick to write the corresponding code in Visual Studio.  They probably used fewer code snippets than any of the other presenters so far at PDC.

All in all, an excellent session, though it did make my head spin.

Keynote #2 – Ozzie, Sinofsky, Guthrie, Treadwell

PDC 2008, Day #2, Keynote #2, 2 hrs

Ray Ozzie, Steven Sinofsky, Scott Guthrie, David Treadwell

Wow.  In contrast to yesterday’s keynote, where Windows Azure was launched, today’s keynote was the kind of edge-of-your-seat collection of product announcements that explain why people shell out $1,000+ to come to PDC.  The keynote was a 2-hr extravaganza of non-stop announcements and demos.

In short, we got a good dose of Windows 7, as well as new tools in .NET 3.5 SP1, Visual Studio 2008 SP1 and the future release of Visual Studio 2010.  Oh yeah—and an intro to Office 14, with online web-based versions of all of your favorite Office apps.

Not to mention a new Paint applet with a ribbon interface.  Really.

Ray Ozzie Opening

The keynote started once again today with Ray Ozzie, reminding us of what was announced yesterday—the Azure Services Platform.

Ray pointed out that while yesterday focused on the back-end, today’s keynote would focus on the front-end: new features and technologies from a user’s perspective.

He pointed out that the desktop-based PC and the internet are still two completely separate worlds.  The PC is where we sit when running high-performance close-to-the-metal applications.  And the web is how we access the rest of the world, finding and accessing other people and information.

Ray also talked about the phone being the third main device where people spend their time.  It’s always with us, so it can respond to our spontaneous need for information.

The goal for Microsoft, of course, is that applications try to span all three of these devices—the desktop PC, the web, and the phone.  The apps that can do this, says Ozzie, will deliver the greatest value.

It’s no surprise either that Ray mentioned Microsoft development tools as providing the best platform for developing these apps that will span the desktop/web/phone silos.

Finally, Ray positioned Windows 7 as being the best platform for users, since we straddle these three worlds.

Steven Sinofsky

Next up was Steven Sinofsky, Senior VP for Windows and Windows Live Engineering Group at Microsoft.  Steven’s part of the keynote was to introduce Windows 7.  Here are a couple of tidbits:

  • Windows 7 now in pre-beta
  • Today’s pre-beta represents “M3”—a feature-complete milestone on the way to Beta and eventual RTM.  (The progression is M1/M2/M3/M4/Beta)
  • The beta will come out early in 2009
  • Release still targeted at 3 years after the RTM of Vista, putting it at November of 2009
  • Server 2008 R2 is also in pre-beta, sharing its kernel with Windows 7

Steven mentioned three warm-fuzzies that Windows 7 would focus on:

  • Focus on the personal experience
  • Focus on connecting devices
  • Focus on bringing functionality to developers

Julie Larson-Green — Windows 7 Demo

Next up was Julie Larson-Green, Corporate VP, Windows Experience.  She took a spin through Windows 7 and showed off a number of the new features and capabilities.

New taskbar

  • Combines Alt-Tab for task switching, current taskbar, and current quick launch
  • Taskbar includes icons for running apps, as well as non-running (icons to launch apps)
  • Can even switch between IE tabs from the taskbar, or close tabs
  • Can close apps directly from the taskbar
  • Can access app’s MRU lists from the taskbar (recent files)
  • Can drag/dock windows on desktop, so that they quickly take exactly half available real estate

Windows Explorer

  • New Libraries section
    • A library is a virtual folder, providing access to one or more physical folders
    • Improved search within a library, i.e. across a subset of folders

Home networking

  • Automatic networking configuration when you plug a machine in, connecting to new “Homegroup”
  • Automatic configuration of shared resources, like printers
  • Can search across entire Homegroup (don’t need to know what machine a file lives on)


Media

  • New lightweight media player
  • Media center libraries now shared & integrated with Windows Explorer
  • Right-click on media and select device to play on, e.g. home stereo


Devices

  • New Device Stage window, summarizing all the operations you can perform with a connected device (e.g. mobile device)
  • Configure the mobile device directly from this view


Desktop gadgets

  • Can now exist on your desktop even without the sidebar being present


Personalization

  • Can share desktop themes with other users
  • User has full control of what icons appear in system tray
  • New Action Center view is central place for reporting on PC’s performance and health characteristics

Multi-touch capabilities

  • Even apps that are not touch-aware can leverage basic gestures (e.g. scrolling/zooming).  Standard mouse behaviors are automatically mapped to equivalent gestures
  • Internet Explorer has been made touch-aware, for additional functionality:
    • On-screen keyboard
    • Navigate to hyperlink by touching it
    • Back/Forward with flick gesture

Applet updates

  • Wordpad gets Ribbon UI
  • MS Paint gets Ribbon UI
  • New calculator applet with separate Scientific / Programmer / Statistics modes

Sinofsky Redux

Sinofsky returned to touch on a few more points for Windows 7:

  • Connecting to Live Services
  • Vista “lessons learned”
  • How developers will view Windows 7

Steve talked briefly about how Windows 7 will more seamlessly allow users to connect to “Live Essentials”, extending their desktop experience to the cloud.  It’s not completely clear what this means.  He mentioned the user choosing their own non-Microsoft services to connect to.  I’m guessing that this is about some of the Windows 7 UI bits being extensible and able to incorporate data from Microsoft Live services.  Third party services could presumably also provide content to Windows 7, assuming that they implemented whatever APIs are required.

The next segment was a fun one—Vista “lessons learned”.  Steve made a funny reference to all of the feedback that Microsoft has gotten on Vista, including a particular TV commercial.  It was meant as a clever joke, but Steve didn’t get that many laughs—likely because it was just too painfully true.

Here are the main lessons learned with Vista.  (I’ve changed the verb tense slightly, so that we can read this as more of a confession).

  • The ecosystem wasn’t ready for us.
    • Ecosystem required lots of work to get to the point where Vista would run on everything
    • 95% of all PCs running today are indeed able to run Vista
    • Windows 7 is based on the same kernel, so we won’t run into this problem again
  • We didn’t adhere to standards
    • He’s talking about IE7 here
    • IE8 addresses that, with full CSS standards compliance
    • They’ve even released their compliance test results to the public
    • Win 7 ships with IE8, so we’re fully standards-compliant, out of the box
  • We broke application compatibility
    • With UAC, applications were forced to support running as a standard user
    • It was painful
    • We had good intentions and Vista is now more secure
    • But we realize that UAC is still horribly annoying
    • Most software now supports running as a standard user
  • We delivered features, rather than solutions to typical user scenarios
    • E.g. Most typical users have no hope of properly setting up a home network
    • Microsoft failed to deliver the “last mile” of required functionality
    • Much better in Windows 7, with things like automatic network configuration

The read-between-the-lines takeaway is that we won’t make these same mistakes with Windows 7.  That’s a clever message.  The truth is that these shortcomings have basically already been addressed in Vista SP1.  So because Windows 7 is essentially just the next minor rev of Vista, it inherits the same solutions.

But there is one shortcoming with Vista that Sinofsky failed to mention—branding.  Vista is still called “Vista” and the damage is already done.  There are users out there who will never upgrade to Vista, no matter what marketing messages we throw at them.  For these users, we have Windows 7—a shiny new brand to slap on top of Vista, which is in fact a stable platform.

This is a completely reasonable tactic.  Vista basically works great—the only remaining problem is the perception of its having not hit the mark.  And Microsoft’s goal is to create the perception that Windows 7 is everything that Vista was not.

Enough ranting.  On to Sinofsky’s list of things that Windows 7 provides for Windows developers:

  • The ribbon UI
    • The new Office ribbon UI element has proved itself in the various Office apps.  So it’s time to offer it up to developers as a standard control
    • The ribbon UI will also gradually migrate to other Windows/Microsoft applications
    • In Windows 7, we now get the ribbon in Wordpad and Paint.  (I’m also suspecting that they are now WPF applications)

  • Jump lists
    • These are new context menus built into the taskbar that applications can hook into
    • E.g. For “most recently used” file lists
  • Libraries
    • Apps can make use of new Libraries concept, loading files from libraries rather than folders
  • Multi-touch, Ink, Speech
    • Apps can leverage new input mechanisms
    • These mechanisms just augment the user experience
    • New/unique hardware allows for some amazing experiences
  • DirectX family
    • API around powerful graphics hardware
    • Windows 7 extends the DirectX APIs

Next, Steven moved on to talk about basic fundamentals that have been improved in Windows 7:


Resource usage

  • Memory — kernel has smaller memory footprint
  • Disk I/O — reduced registry reads and use of indexer
  • Power  — DVD playback cheaper, ditto for timers


Performance

  • Speed  — quicker boot time, device-ready time
  • Responsiveness  — worked hard to ensure Start Menu always very responsive
  • Scale  — can scale out to 256 processors

Yes, you read that correctly—256 processors!  Hints of things to come over the next few years on the hardware side.  Imagine how slow your single-threaded app will appear to run when running on a 256-core machine!

Sinofsky at this point ratcheted up and went into a sort of but wait, there’s more mode that would put Ron Popeil to shame.  Here are some other nuggets of goodness in Windows 7:

  • Bitlocker encryption for memory sticks
    • No more worries when you lose these
  • Natively mount/managed Virtual Hard Drives
    • Create VHDs from within Windows
    • Boot from VHDs
  • DPI
    • Easier to set DPI and work with it
    • Easier to manage multiple monitors
  • Accessibility
    • Built-in magnifier with key shortcuts
  • Connecting to an external projector in Alt-Tab fashion
    • Could possibly be the single most important reason for upgrading to Win 7
  • Remote Desktop can now access multiple monitors
  • Can move Taskbar all over the place
  • Can customize the shutdown button  (cheers)
  • Action Center allows turning off annoying messages from various subsystems
  • New slider that allows user to tweak the “annoying-ness” of UAC (more cheers)

As a final note, Sinofsky mentioned that as developers, we had damn well better all be developing for 64-bit platforms.  A good percentage of new Windows 7 boxes are likely to ship on x64.  (His language wasn’t this strong, but that was the message).

Scott Guthrie

As wilted as we all were with the flurry of Windows 7 takeaways, we were only about half done.  Scott Guthrie, VP, Developer Division at Microsoft, came on stage to talk about development tools.

He started by pointing out that you can target Windows 7 features from both managed (.NET) and native (Win32) applications.  Even C++/MFC are being updated to support some of the new features in Windows 7.

Scott talked briefly about the .NET Framework 3.5 SP1, which has already released:

  • Streamlined setup experience
  • Improved startup times for managed apps  (up to 40% improvement to cold startup times)
  • Graphics improvements, better performance
  • DirectX interop
  • More controls
  • 3.5 SP1 built into Windows 7

Scott then demoed taking an existing WPF application and adding support for Windows 7 features:

  • He added a ribbon at the top of the app
  • Added JumpList support for MRU lists in the Windows taskbar
  • Added Multi-touch support

Scott announced a new WPF toolkit being released this week that includes:

  • DatePicker, DataGrid, Calendar controls
  • Visual State Manager support (like Silverlight 2)
  • Ribbon control  (CTP for now)

Scott talked about some of the basics coming in .NET 4 (coming sometime in 2009?):

  • Different versions of .NET CLR running SxS in the same process
  • Easier managed/native interop
  • Support for dynamic languages
  • Extensibility Component Model (MEF)

At this point, Scott also starts dabbling in the but wait, there’s more world, as he demoed Visual Studio 2010:

  • Much better design-time support for WPF
  • Visual Studio itself now rewritten in WPF
  • Multi-monitor support
  • More re-factoring support
  • Better support for Test Driven Development workflow
  • Can easily create plugins using MEF

Whew.  Now he got to the truly sexy part—probably the section of the keynote that got the biggest reaction out of the developer crowd.  Scott showed off a little “third party” Visual Studio plug-in that pretty-formatted XML comments (e.g. function headers) as little graphical WPF widgets.  Even better, the function headers, now graphically styled, also contained hot links right into a local bug database.  Big cheers.

Sean’s prediction—this will lead to a new ecosystem for Visual Studio plugins and interfaces to other tools.

Another important takeaway—MEF, the new extensibility framework, isn’t just for Visual Studio.  You can also use MEF to extend your own applications, creating your own framework.

Demo of Rich WPF Client Application

Here we got our obligatory partner demo, as a guy from one of Microsoft’s partners showed off their snazzy application that allowed users to order groceries.  Lots of 2D and 3D graphical effects—one of the more compelling WPF apps that I’ve seen demoed.

Scott Redux

Scott came back out to talk a bit about new and future offerings on the web development side of things.

Here are some of the ASP.NET improvements that were delivered with .NET 3.5 SP1:

  • Dynamic Data
  • REST support
  • MVC (Model-View-Controller framework)
  • AJAX / jQuery  (with jQuery intellisense in Visual Studio 2008)

ASP.NET 4 will include:

  • Web Forms improvements
  • MVC improvements
  • AJAX improvements
  • Richer CSS support
  • Distributed caching

Additionally, Visual Studio 2010 will include better support for web development:

  • Code-focused improvements  (??)
  • Better JavaScript / AJAX tooling
  • Design View CSS2 support
  • Improved publishing and deployment

Scott then switched gears to talk about new and future offerings for Silverlight.

Silverlight 2 was just RTM’d two weeks ago.  Additionally, Scott presented two very interesting statistics:

  • Silverlight 1 is now present on 25% of all Internet-connected machines
  • Silverlight 2 has been downloaded to 100 million machines

IIS will inherit the adaptive (smooth) media streaming that was developed for the NBC Olympics web site.  This is available today.

A new Silverlight toolkit is being released today, including:

  • Charting controls, TreeView, DockPanel, WrapPanel, ViewBox, Expander, NumericUpDown, AutoComplete et al
  • Source code will also be made available

Visual Studio 2010 will ship with a Silverlight 2 designer, based on the existing WPF designer.

We should also expect a major release of Silverlight next year, including things like:

  • H264 media support
  • Running Silverlight applications outside of the browser
  • Richer graphics support
  • Richer data-binding support

Whew.  Take a breath…

David Treadwell – Live Services

While we were all still reeling from Scott Gu’s segment, David Treadwell, Corporate VP, Live Platform Services at Microsoft, came out to talk about Live Services.

The Live Services offerings are basically a set of services that allow applications to interface with the various Windows Live properties.

The key components of Live Services are:

  • Identity – Live ID and federated identity
  • Directory – access to social graph through a Contacts API
  • Communication & Presence – add Live Messenger support directly to your web site
  • Search & Geo-spatial – including mashups on your web sites

The Live Services are all made available via standards-based protocols.  This means that you can invoke them from not only the .NET development world, but also from other development stacks.

David talked a lot about Live Mesh, a key component of Live Services:

  • Allows applications to bridge Users / Devices / Applications
  • Data synchronization is a core concept

Applications access the various Live Services through a new Live Framework:

  • Set of APIs that allow apps to get at Live Services
  • Akin to CLR in desktop environment
  • Live Framework available from PC / Web / Phone applications
  • Open protocol, based on REST, callable from anything

Ori Amiga Demo

Ori Amiga came out to do a quick demonstration of how to “Meshify” an existing application.

The basic idea of Mesh is that it allows applications to synchronize data across all of a user’s devices.  But importantly, this only works for users who have already signed up for Live Mesh.

Live Mesh supports storing the user’s data “in the cloud”, in addition to on the various devices.  But this isn’t required.  Applications could use Mesh merely as a transport mechanism between instances of the app on various devices.

Takeshi Numoto – The Closer

Finally, Takeshi Numoto, GM, Office Client at Microsoft, came out to talk about Office 14.

Office 14 will deliver Office Web Applications—lightweight versions of the various Office applications that run in a browser.  Presumably they can also store all of their data in the cloud.

Takeshi then did a demo that focused a bit more on the collaboration features of Office 14 than on the ability to run apps in the browser.  (Running in the browser just works and the GUI looks just like the rich desktop-based GUI).

Takeshi showed off some pretty impressive features and use cases:

  • Two users editing the same document at the same time, both able to write to it
  • As users change pieces of the document, little graphical widget shows up on the other user’s screen, showing what piece the first user is currently changing.  All updated automatically, in real-time
  • Changes are pushed out immediately to other users who are viewing/editing the same document
  • This works in Word, Excel, and OneNote  (at least these apps were demoed)
  • Can publish data out to data stores in Live Services

Ray’s Wrapup

Ray Ozzie came back out to wrap everything up.  He pointed out that everything we’d seen today was real.  He also pointed out that some of these technologies were more “nascent” than others.  In other words—no complaints if some bits don’t work perfectly.  It seemed an odd note to end on.