Session – Building Mesh-Enabled Web Applications Using the Live Framework

PDC 2008, Day #3, Session #5, 1 hr 15 mins

Arash Ghanaie-Sichanie

Throughout the conference, I bounced back and forth between Azure and Live Mesh sessions.  I was trying to make sense of the difference between them and understand when you might use one vs. the other.

I understand that Azure Services is the underlying platform that Live Services and Live Mesh are built on.  But at the outset, it still wasn’t quite clear what class of applications Live Services were targeted at.  Which apps would want to use Live Services and which would need to drop down and use Azure?

I think that after Arash’s talk, I have a working stab at answering this question.  This is my current understanding, which I’ll update as things become clearer.

Question: When should I use Live Services / Live Mesh?

Answer:

  • Create a Live Mesh application (Mesh-enabled) if your customers are currently, or will become, live.com customers.
  • Otherwise, use Azure cloud-based services outside of Live, or the other services built on top of Azure:
    • Azure storage services
    • Sync Framework
    • Service Bus
    • SQL Data Services

Unless I’m missing something, you can’t make use of features of the Live Operating Environment, either as a web application or a local desktop client, unless your application is registered with Live.com, and your user has added your application to his account.

The one possible loophole that I can see is that you might just have your app always authorize, for all users, using a central account.  That account could be pre-created and pre-configured.  Your application would then use the same account for all users, enabling some of the synchronization and other options.  But even this approach might not work—it’s possible that any device where local Mesh data is to be stored needs to be registered with that Live account and so your users wouldn’t normally have the authorization to join their devices to your central mesh.

Three Takeaways

Arash listed three main takeaways from this talk:

  • Live Services add value to different types of applications
  • Mesh-enabled web apps extend web sites to the desktop
  • The Live Framework is a standards-based API, available to all types of apps

How It All Works

The basic idea is to start with a “Mesh-enabled” web site.  This is a web site that delivers content from the user’s Mesh, including things like contacts and files.  Additionally, the web application could store all of its data in the Mesh, rather than on the web server where it is hosted.

Once a web application is Mesh-enabled, you have the ability to run it on various devices.  You basically create a local client, targeted at a particular platform, and have it work with data through the Live Framework API.  It typically ends up working with a cached local copy of the data, and the data is automatically synchronized to the cloud and then to other devices that are running the same application.
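
The Live Framework exposes all of this over plain HTTP/REST (the feeds are AtomPub), which is why the same data is reachable from any stack.  As a rough illustration, here is what reading a user’s mesh might look like from Python; the base URL and token format are placeholders I’ve invented, not the real Live Operating Environment values.

```python
# A minimal sketch of reading mesh data through the Live Framework's
# REST interface.  The base URL and token below are hypothetical
# placeholders, not the real Live Operating Environment addresses.
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://user.example-loe.net/V0.1"  # hypothetical endpoint
TOKEN = "DelegatedToken t=..."                  # placeholder auth token

def get_feed(path):
    """Fetch an AtomPub feed describing part of the user's mesh."""
    req = urllib.request.Request(
        f"{BASE_URL}/{path}",
        headers={"Authorization": TOKEN, "Accept": "application/atom+xml"},
    )
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

# Enumerate the user's top-level Mesh Objects.
ns = {"atom": "http://www.w3.org/2005/Atom"}
feed = get_feed("Mesh/MeshObjects")
for entry in feed.findall("atom:entry", ns):
    print(entry.findtext("atom:title", namespaces=ns))
```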

This basically implements the Mesh vision that Ray Ozzie presented first at Mix 2008 in Las Vegas and then again at PDC 2008 in Los Angeles.  The idea is that we move a user’s data into the cloud and then the data follows them around, no matter what device they’re currently working on.  The Mesh knows about a user’s:

  • Devices
  • Data, including special data like Contacts
  • Applications
  • Social graph  (friends)

The User’s Perspective

The user must do a few things to get your application or web site pointing at his data.  As a developer, you send him a link that lets him go sign up for a Live.com account and then register your application in his Mesh.  Registration, from the user’s perspective, is a way for him to authorize your application to access his Mesh-based data.

Again, it’s required that the user have, or get, a Live.com account.  That’s sort of the whole idea—we’re talking about developing applications that run on the Live platform.

Run Anywhere

There is still work to be done, on the part of the developer, to be able to run the application on various devices, but the basic choices are:

  • Locally, on a client PC or Mac
  • In a web browser, anywhere
  • From the Live Desktop itself, which is hosted in a browser

(In the future, with Silverlight adding support for running sandboxed apps outside of the browser, we can imagine that as a fourth option for Mesh-enabled applications.)

The Developer’s Perspective

In addition to building the mesh application, the developer must also register his application, using the Azure Services Portal.  Under the covers, the application is just an Azure-based service.  And so you can leverage the Azure standard goodies, like running your service in a test mode and then deploying/publishing it.

One other feature that is made available to Mesh application authors is the ability to view basic Analytics data for your application.  Because the underlying Mesh service is aware of your application, wherever it is running, it can collect data about usage.  The data is “anonymized”, so you can’t see data about individual users, but can view general metrics.

Arash talked in some detail about the underlying security model, showing how a token is granted to the user/application.
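
I didn’t capture the specifics, but the overall shape is a familiar delegated-token flow: the application trades the user’s consent for a token, then presents that token on every Live Framework call.  A very rough sketch, with entirely hypothetical endpoint and field names:

```python
# Hypothetical sketch of the token grant; none of these names are the
# actual Live Framework endpoints or parameters.
import json
import urllib.parse
import urllib.request

def get_delegated_token(app_id, consent_ticket):
    """Trade the app's identity plus the user's consent for a token."""
    body = urllib.parse.urlencode(
        {"appid": app_id, "ticket": consent_ticket}
    ).encode()
    req = urllib.request.Request(
        "https://auth.example-live.net/token",  # hypothetical token service
        data=body,  # sent as a POST
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]

# Subsequent calls would then carry something like:
#   headers={"Authorization": f"DelegatedToken t={token}"}
```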

Conclusions

Mesh obviously seems a good fit for some applications.  You can basically run on a platform that gives you a lot of services, as well as access to useful data that may already exist in a particular user’s Mesh environment.  There is also some potential for cross-application sharing of data, once common data models are agreed upon.

But the choice to develop on the Mesh platform implies a decision to sign your users up as part of the Mesh ecosystem.  While the programming APIs are entirely open, using basic HTTP/REST protocols, the platform itself is owned/hosted/run exclusively by Microsoft.

Not all of your users will want to go through the hassle of setting up a Live account in order to use your application.  What makes it a little worse is that the process for them is far from seamless.  It would be easier if you could hide the Live branding and automate some of the configuration.  But the user must actually log into Live.com and authorize your application.  This is a pretty high barrier, in terms of usability, for some users.

This also means that your app is betting on the success of the Live.com platform itself.  If the platform doesn’t become widely adopted, few users will live (pardon the pun) in that environment.  And the value of hosting your application in that ecosystem becomes less clear.

It remains to be seen where Live as a platform will go.  The tools and the programming models are rich and compelling.  But whether the platform will live up to Ozzie’s vision is still unclear.

Session – Live Services: Live Framework Programming Model Architecture and Insights

PDC 2008, Day #3, Session #1, 1 hr 15 mins

Ori Amiga

My next session dug a bit deeper into the Live Framework and some of the architecture related to building a Live Mesh application.

Ori Amiga was presenting, filling in for Dharma Shukla (who just became a new Dad).

Terminology

It’s still a little unclear what terminology I should be using.  In some areas, Microsoft is switching from “Mesh” to just plain “Live”.  (E.g. Mesh Operating Environment is now Live Operating Environment).  And the framework that you use to build Mesh applications is the Live Framework.  But they are still very much talking about “Mesh”-enabled applications.

I think that the way to look at it is this:

  • Azure is the lowest level of cloud infrastructure stuff
  • The Live Operating Environment runs on top of Azure and provides some basic services useful for cloud applications
  • Mesh applications run on top of the LOE and provide access to a live.com user’s “mesh”: devices, applications and data that lives inside their mesh

I think that this basically means that you could have an application that makes use of the various Live Services in the Live Operating Environment without actually being a Mesh application.  On the other hand, some of the services in the LOE don’t make any sense to non-Mesh apps.

Live Operating Environment  (LOE)

Ori reviewed the Live Operating Environment, which is the runtime that Mesh applications run on top of.  Here’s a diagram from Mary Jo Foley’s blog:

This diagram sort of supports my thought that access to a user’s mesh environment is different from the basic stuff provided in the LOE.  According to this particular view, Live Services are the services that provide access to the “mesh stuff”: the user’s contact lists, information about their devices, the data stores (data stored in the mesh or out on the devices), and other applications in that user’s mesh.

The LOE would contain all of the other stuff—basically a set of utility classes, akin to the CLR for desktop-based applications.  (Oh wait, Azure is supposed to be “akin to the CLR”). *smile*

Ori talked about a list of services that live in the LOE, including:

  • Scripting engine
  • Formatters
  • Resource management
  • FSManager
  • Peer-to-peer communications
  • HTTP communications
  • Application engine
  • Apt(?) throttle
  • Authentication/Authorization
  • Notifications
  • Device management

Here’s another view of the architecture (you can also find it here).

Also, for more information on the Live Framework, you can go here.

Data in the Mesh

Ori made an important point about how Mesh applications access their data.  If you have a Mesh client running on your local PC, and you’ve set up its associated data store to synch between the cloud and that device, the application uses local data, rather than pulling data down from the cloud.  Because it’s working entirely with locally cached data, it can run faster than the corresponding web-based version (e.g. running in the Live Desktop).

Resource Scripts

Ori talked a lot about resource scripts and how they might be used by a Mesh-enabled application.  An application can perform actions in the Mesh using these resource scripts, rather than performing actions directly in the code.

The resource scripting language contains things like:

  • Control flow statements – sequence and interleaving, conditionals
  • Web operation statements – to issue HTTP POST/PUT/GET/DELETE
  • Synchronization statements – to initiate data synchronization
  • Data flow constructs – for binding statements to other statements(?)

Ori did a demo that showed off a basic script.  One of the most interesting things was how he combined sequential and interleaved statements.  The idea is that you specify what things you need to do in sequence (like getting a mesh object and then getting its children), and what things you can do in parallel (like getting a collection of separate resources).  The parallelism is automatically taken care of by the runtime.
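
To make the sequence/interleave idea concrete, here is a small Python sketch of the concept.  This is not the actual resource script syntax, just an illustration of declaring which steps are sequential and which are parallel, and letting a runtime handle the rest:

```python
# Toy "script runner": sequence() chains dependent steps, interleave()
# runs independent steps in parallel, like the script statements Ori showed.
from concurrent.futures import ThreadPoolExecutor

def sequence(*steps):
    """Run steps one after another, feeding each result to the next."""
    result = None
    for step in steps:
        result = step(result)
    return result

def interleave(*steps):
    """Run independent steps in parallel and collect their results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(step, None) for step in steps]
        return [f.result() for f in futures]

# Get a mesh object first, then fetch several of its resources in parallel.
result = sequence(
    lambda _: "meshObjects/42",               # must happen first
    lambda obj: interleave(
        lambda _: f"GET {obj}/DataFeeds",     # these requests are
        lambda _: f"GET {obj}/Mappings",      # independent, so the runtime
        lambda _: f"GET {obj}/Members",       # is free to run them together
    ),
)
print(result)
```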

Custom Data

Ori also talked quite a bit about how an application might view its data.  The easiest thing to do would be to simply invent your own schema and then be the only app that reads/writes the data in that schema.

A more open strategy, however, would be to create a data model that other applications could use.  Ori talked philosophically here, arguing that this openness serves to improve the ecosystem.  If you can come up with a custom data model that might be useful to other applications, they could be written to work with the same data that your application uses.

Ori demonstrated this idea of custom data in Mesh.  Basically you create a serializable class and then mark it up so that it gets stored as user data within a particular DataEntry.  (Remember: Mesh objects | Data feeds | Data entries).
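
As a rough illustration of the mechanics (with invented element names, not the Live Framework’s actual schema), the pattern amounts to serializing your type and embedding it as opaque XML inside a DataEntry:

```python
# Sketch of custom data riding inside a DataEntry as an opaque payload.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class GroceryItem:
    name: str
    quantity: int

def to_data_entry(item):
    entry = ET.Element("DataEntry")
    ET.SubElement(entry, "Title").text = item.name
    # The custom payload is just XML jammed into the entry, which is
    # why (as noted below) it isn't queryable or indexable.
    user_data = ET.SubElement(entry, "UserData")
    payload = ET.SubElement(user_data, "GroceryItem")
    payload.set("name", item.name)
    payload.set("quantity", str(item.quantity))
    return entry

print(ET.tostring(to_data_entry(GroceryItem("toilet paper", 1)), encoding="unicode"))
```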

This seems like an attractive idea, but it seems a bit clunky.  The custom data is embedded into the standard AtomPub stream, but not in a queryable way.  It looked more like it was jammed into an XML element in the <DataEntry> element.  This means that your custom data items would not be directly queryable.

Ori did go on to admit that custom data isn’t queryable or indexable, and is really meant only for “lightweight data”.  This is at odds with the philosophy of a reusable schema for other applications.

Tips & Tricks

Finally, Ori presented a handful of tips & tricks for working with Mesh applications:

  • To clean out local data cache, just delete the DB/MR/Assembler directories and re-synch
  • Local metadata is actually stored in SQL Server Express.  Go ahead and peek at it, but be careful not to mess it up.
  • Use the Resource Model Browser to really see what’s going on under the covers.  What it shows you represents the truth of what’s happening between the client and the cloud
  • One simple way to track synch progress is to just look at the size of the Assembler and MR directories
  • Collect logs and send to Microsoft when reporting a problem

Summary

Ori finished with the following summary:

  • Think of the cloud as just a special kind of device
  • There is a symmetric cloud/client programming model
  • Everything is a Resource

Session – Live Services: What I Learned Building My First Mesh Application

PDC 2008, Day #2, Session #1, 45 mins

Don Gillett

In my next session, Don Gillett explained how he wrote one of the first Live Mesh applications.  Jot is a little note-taking application that synchronizes simple lists across instances running on a PC, on a phone, or as a “Mesh application” in your web browser.

Mesh is all about data synchronization and Jot demonstrates how this works.  To start with, the following preconditions must all be set up:

  • User has created a Live Mesh account  (this is free and comes with a free/default amount of storage)
  • Jot has been installed to the PC and the phone
  • Jot has been installed in the Mesh desktop as a “Mesh application”

At this point, Jot will be able to take advantage of the Mesh platform to automatically synchronize its lists with all three endpoints.

Here’s one possible scenario:

  • John is at home, fires up Jot on his PC and adds “toilet paper” to the “Groceries” list in Jot
  • Data is automatically synchronized to the other devices in the Mesh—Jot running on the Mesh desktop and John’s wife’s phone
  • John’s wife stops at the grocery store on the way home, fires up Jot on her phone and gets the current grocery list, which now includes toilet paper

Jot is a very simple application, but it demonstrates the basics of how the Mesh platform works.  Its primary goal is to synchronize data across multiple devices.

Out of the box, you can synchronize data without creating or running Mesh applications.  You just create a data folder up on the Mesh desktop and then set it up to synchronize on the desired devices.  Changes made to files in the corresponding folders on any of the devices will be automatically pushed out to the other devices, as well as to the folder on the Mesh desktop.

In this out-of-the-box data synch scenario, because the data lives in the Mesh desktop, you get two main benefits:

  • Data is always being backed up “to the cloud” because it’s stored in the Mesh
  • You can access the data from any public web access point by simply going to the Mesh web site

Writing Your Own Mesh-Enabled Application

Applications written to support Mesh can take advantage of this cross-device synchronization, but only for users who have signed up for a Live Mesh account.

Because this technology is useful only for users who are using Mesh, it remains to be seen how widespread it will be.  If Live Services and Live Mesh aren’t widely adopted, the ecosystem for Mesh applications will be equally limited.

But if you do write an application targeted at Mesh users, the API for subscribing to and publishing Mesh data is very easy to use.  It is simply another .NET namespace that you can get at.  And if you’re writing non-.NET applications, all of the same functionality is available through HTTP, using a REST interface.

What Types of Data Can You Synchronize?

It’s important to remember that Mesh applications aren’t just limited to accessing shared files in shared folders.  Every Mesh application can basically be thought of as a custom data provider and data consumer.  This means two things:

  • It can serve data up to the mesh in any format desired (not just files)
  • That format is understood by the same app running on other devices

As an example, your Mesh application might serve up metadata about files, the files themselves, and combine it with other data local to that device.

The Data Model

Don talked about the underlying data model that your application uses when publishing or consuming data.  It looks something like this:

  • Mesh Objects
    • Data Feeds
      • Data Entries — optionally pointing to Enclosures

Mesh Objects are the top-level shareable units that your Mesh application traffics in.  They can contain one or more Data Feeds.  If they are too large, synchronization will be too costly, because they will change too often.  If they are too small, or fine-grained, users may get shared data that is incomplete.  What’s important is that a Mesh application gets to decide what data units it wants to serve up to the mesh.

Data Feeds represent collections of data.

Data Entries are the individual chunks of data within a Data Feed.  Each is uniquely identified.  These are the smallest logical storage units and the chunks that will be synchronized using the underlying synchronization technology.

Enclosures are used when your data grows too large to store directly in a text-based feed.  They are owned by and pointed to by an associated Data Entry.  Media files are an example of data that would be stored in an Enclosure, rather than in the Data Entry itself.
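
To summarize my reading of the model (my own sketch, not an official schema), the hierarchy maps naturally onto plain data structures:

```python
# Sketch of the Mesh data model: objects contain feeds, feeds contain
# entries, and an oversized entry points out to an enclosure.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataEntry:
    entry_id: str                        # each entry is uniquely identified
    payload: str                         # small text-based data lives inline
    enclosure_url: Optional[str] = None  # large/binary data lives out-of-line

@dataclass
class DataFeed:
    name: str
    entries: list = field(default_factory=list)

@dataclass
class MeshObject:
    name: str                            # the top-level shareable unit
    feeds: list = field(default_factory=list)

groceries = MeshObject(
    "Groceries",
    feeds=[DataFeed("items", entries=[DataEntry("1", "toilet paper")])],
)
print(groceries)
```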

The Evolution of a Meshified Application

Don presented a nice plan for how you might start with a simple application and then work towards making it a fully Mesh-aware application:

  1. Persist (read/write) data to a local file
  2. Read/write data from/to a local DataFeed object
  3. Utilize FeedSync to read/write your data from/to the Mesh “cloud”

Don then walked through an example, building a tiny application that would synch up a list of simple “checklist” objects, containing just a string and an IsChecked Boolean.
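
Here is what step 1 of that progression might look like, sketched in Python rather than .NET.  Steps 2 and 3 would swap this local storage layer for a DataFeed object, and then for FeedSync against the cloud:

```python
# Step 1: persist the checklist to a local file.
import json
from dataclasses import asdict, dataclass

@dataclass
class ChecklistItem:
    text: str
    is_checked: bool = False

def save(items, path="checklist.json"):
    with open(path, "w") as f:
        json.dump([asdict(item) for item in items], f)

def load(path="checklist.json"):
    with open(path) as f:
        return [ChecklistItem(**d) for d in json.load(f)]

save([ChecklistItem("toilet paper"), ChecklistItem("milk", True)])
print(load())
```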

Final Thoughts

Don mentioned additional Live Services APIs that your application can take advantage of, in addition to data synchronization:

  • Live ID
  • Authentication
  • Contact lists
  • News Feeds  (who is changing what in Mesh)

He also mentioned a couple of tools that are very useful when writing/debugging Mesh applications:

  • Resource Model Browser  (available in the Live Framework SDK)
  • Fiddler 2, a third-party web traffic logger, from fiddlertool.com

Keynote #2 – Ozzie, Sinofsky, Guthrie, Treadwell

PDC 2008, Day #2, Keynote #2, 2 hrs

Ray Ozzie, Steven Sinofsky, Scott Guthrie, David Treadwell

Wow.  In contrast to yesterday’s keynote, where Windows Azure was launched, today’s keynote was the kind of edge-of-your-seat collection of product announcements that explain why people shell out $1,000+ to come to PDC.  The keynote was a 2-hr extravaganza of non-stop announcements and demos.

In short, we got a good dose of Windows 7, as well as new tools in .NET 3.5 SP1, Visual Studio 2008 SP1 and the future release of Visual Studio 2010.  Oh yeah—and an intro to Office 14, with online web-based versions of all of your favorite Office apps.

Not to mention a new Paint applet with a ribbon interface.  Really.

Ray Ozzie Opening

The keynote started once again today with Ray Ozzie, reminding us of what was announced yesterday—the Azure Services Platform.

Ray pointed out that while yesterday focused on the back-end, today’s keynote would focus on the front-end: new features and technologies from a user’s perspective.

He pointed out that the desktop-based PC and the internet are still two completely separate worlds.  The PC is where we sit when running high-performance close-to-the-metal applications.  And the web is how we access the rest of the world, finding and accessing other people and information.

Ray also talked about the phone being the third main device where people spend their time.  It’s always with us, so it can respond to our spontaneous need for information.

The goal for Microsoft, of course, is that applications try to span all three of these devices—the desktop PC, the web, and the phone.  The apps that can do this, says Ozzie, will deliver the greatest value.

It’s no surprise either that Ray mentioned Microsoft development tools as providing the best platform for developing these apps that will span the desktop/web/phone silos.

Finally, Ray positioned Windows 7 as being the best platform for users, since we straddle these three worlds.

Steven Sinofsky

Next up was Steven Sinofsky, Senior VP for Windows and Windows Live Engineering Group at Microsoft.  Steven’s part of the keynote was to introduce Windows 7.  Here are a couple of tidbits:

  • Windows 7 now in pre-beta
  • Today’s pre-beta represents “M3”—a feature-complete milestone on the way to Beta and eventual RTM.  (The progression is M1/M2/M3/M4/Beta)
  • The beta will come out early in 2009
  • Release still targeted at 3-yrs after the RTM of Vista, putting it at November of 2009
  • Server 2008 R2 is also in pre-beta, sharing its kernel with Windows 7

Steven mentioned three warm-fuzzies that Windows 7 would focus on:

  • Focus on the personal experience
  • Focus on connecting devices
  • Focus on bringing functionality to developers

Julie Larson-Green — Windows 7 Demo

Next up was Julie Larson-Green, Corporate VP, Windows Experience.  She took a spin through Windows 7 and showed off a number of the new features and capabilities.

New taskbar

  • Combines Alt-Tab for task switching, current taskbar, and current quick launch
  • Taskbar includes icons for running apps, as well as non-running (icons to launch apps)
  • Can even switch between IE tabs from the taskbar, or close tabs
  • Can close apps directly from the taskbar
  • Can access app’s MRU lists from the taskbar (recent files)
  • Can drag/dock windows on the desktop, so that they quickly take exactly half the available real estate

Windows Explorer

  • New Libraries section
    • A library is a virtual folder, providing access to one or more physical folders
    • Improved search within a library, i.e. across a subset of folders

Home networking

  • Automatic networking configuration when you plug a machine in, connecting to new “Homegroup”
  • Automatic configuration of shared resources, like printers
  • Can search across entire Homegroup (don’t need to know what machine a file lives on)

Media

  • New lightweight media player
  • Media center libraries now shared & integrated with Windows Explorer
  • Right-click on media and select device to play on, e.g. home stereo

Devices

  • New Device Stage window, summarizing all the operations you can perform with a connected device (e.g. mobile device)
  • Configure the mobile device directly from this view

Gadgets

  • Can now exist on your desktop even without the sidebar being present

Miscellaneous

  • Can share desktop themes with other users
  • User has full control of what icons appear in system tray
  • New Action Center view is central place for reporting on PC’s performance and health characteristics

Multi-touch capabilities

  • Even apps that are not touch-aware can leverage basic gestures (e.g. scrolling/zooming).  Standard mouse behaviors are automatically mapped to equivalent gestures
  • Internet Explorer has been made touch-aware, for additional functionality:
    • On-screen keyboard
    • Navigate to hyperlink by touching it
    • Back/Forward with flick gesture

Applet updates

  • Wordpad gets Ribbon UI
  • MS Paint gets Ribbon UI
  • New calculator applet with separate Scientific / Programmer / Statistics modes

Sinofsky Redux

Sinofsky returned to touch on a few more points for Windows 7:

  • Connecting to Live Services
  • Vista “lessons learned”
  • How developers will view Windows 7

Steve talked briefly about how Windows 7 will more seamlessly allow users to connect to “Live Essentials”, extending their desktop experience to the cloud.  It’s not completely clear what this means.  He mentioned the user choosing their own non-Microsoft services to connect to.  I’m guessing that this is about some of the Windows 7 UI bits being extensible and able to incorporate data from Microsoft Live services.  Third party services could presumably also provide content to Windows 7, assuming that they implemented whatever APIs are required.

The next segment was a fun one—Vista “lessons learned”.  Steve made a funny reference to all of the feedback that Microsoft has gotten on Vista, including a particular TV commercial.  It was meant as a clever joke, but Steve didn’t get that many laughs—likely because it was just too painfully true.

Here are the main lessons learned with Vista.  (I’ve changed the verb tense slightly, so that we can read this as more of a confession).

  • The ecosystem wasn’t ready for us.
    • Ecosystem required lots of work to get to the point where Vista would run on everything
    • 95% of all PCs running today are indeed able to run Vista
    • Windows 7 is based on the same kernel, so we won’t run into this problem again
  • We didn’t adhere to standards
    • He’s talking about IE7 here
    • IE8 addresses that, with full CSS standards compliance
    • They’ve even released their compliance test results to the public
    • Win 7 ships with IE8, so we’re fully standards-compliant, out of the box
  • We broke application compatibility
    • With UAC, applications were forced to support running as a standard user
    • It was painful
    • We had good intentions and Vista is now more secure
    • But we realize that UAC is still horribly annoying
    • Most software now supports running as a standard user
  • We delivered features, rather than solutions to typical user scenarios
    • E.g. Most typical users have no hope of properly setting up a home network
    • Microsoft failed to deliver the “last mile” of required functionality
    • Much better in Windows 7, with things like automatic network configuration

The read-between-the-lines takeaway is we won’t make these same mistakes with Windows 7.  That’s a clever message.  The truth is that these shortcomings have basically already been addressed in Vista SP1.  So because Windows 7 is essentially just the next minor rev of Vista, it inherits the same solutions.

But there is one shortcoming with Vista that Sinofsky failed to mention—branding.  Vista is still called “Vista” and the damage is already done.  There are users out there who will never upgrade to Vista, no matter what marketing messages we throw at them.  For these users, we have Windows 7—a shiny new brand to slap on top of Vista, which is in fact a stable platform.

This is a completely reasonable tactic.  Vista basically works great—the only remaining problem is the perception of its having not hit the mark.  And Microsoft’s goal is to create the perception that Windows 7 is everything that Vista was not.

Enough ranting.  On to Sinofsky’s list of things that Windows 7 provides for Windows developers:

  • The ribbon UI
    • The new Office ribbon UI element has proved itself in the various Office apps.  So it’s time to offer it up to developers as a standard control
    • The ribbon UI will also gradually migrate to other Windows/Microsoft applications
    • In Windows 7, we now get the ribbon in Wordpad and Paint.  (I’m also suspecting that they are now WPF applications)

  • Jump lists
    • These are new context menus built into the taskbar that applications can hook into
    • E.g. For “most recently used” file lists
  • Libraries
    • Apps can make use of new Libraries concept, loading files from libraries rather than folders
  • Multi-touch, Ink, Speech
    • Apps can leverage new input mechanisms
    • These mechanisms just augment the user experience
    • New/unique hardware allows for some amazing experiences
  • DirectX family
    • API around powerful graphics hardware
    • Windows 7 extends the DirectX APIs

Next, Steven moved on to talk about basic fundamentals that have been improved in Windows 7:

Decrease

  • Memory — kernel has smaller memory footprint
  • Disk I/O — reduced registry reads and use of indexer
  • Power  — DVD playback cheaper, ditto for timers

Increase

  • Speed  — quicker boot time, device-ready time
  • Responsiveness  — worked hard to ensure Start Menu always very responsive
  • Scale  — can scale out to 256 processors

Yes, you read that correctly—256 processors!  Hints of things to come over the next few years on the hardware side.  Imagine how slow your single-threaded app will appear to run when running on a 256-core machine!

Sinofsky at this point ratcheted up and went into a sort of but wait, there’s more mode that would put Ron Popeil to shame.  Here are some other nuggets of goodness in Windows 7:

  • Bitlocker encryption for memory sticks
    • No more worries when you lose these
  • Natively mount/manage Virtual Hard Drives
    • Create VHDs from within Windows
    • Boot from VHDs
  • DPI
    • Easier to set DPI and work with it
    • Easier to manage multiple monitors
  • Accessibility
    • Built-in magnifier with key shortcuts
  • Connecting to an external projector in Alt-Tab fashion
    • Could possibly be the single most important reason for upgrading to Win 7
  • Remote Desktop can now access multiple monitors
  • Can move Taskbar all over the place
  • Can customize the shutdown button  (cheers)
  • Action Center allows turning off annoying messages from various subsystems
  • New slider that allows user to tweak the “annoying-ness” of UAC (more cheers)

As a final note, Sinofsky mentioned that as developers, we had damn well better all be developing for 64-bit platforms.  Windows 7 is likely to ship a good percentage of new boxes on x64.  (His language wasn’t this strong, but that was the message).

Scott Guthrie

As wilted as we all were with the flurry of Windows 7 takeaways, we were only about half done.  Scott Guthrie, VP, Developer Division at Microsoft, came on stage to talk about development tools.

He started by pointing out that you can target Windows 7 features from both managed (.NET) and native (Win32) applications.  Even C++/MFC are being updated to support some of the new features in Windows 7.

Scott talked briefly about the .NET Framework 3.5 SP1, which has already released:

  • Streamlined setup experience
  • Improved startup times for managed apps  (up to 40% improvement to cold startup times)
  • Graphics improvements, better performance
  • DirectX interop
  • More controls
  • 3.5 SP1 built into Windows 7

Scott then demoed taking an existing WPF application and adding support for Windows 7 features:

  • Added a ribbon at the top of the app
  • Added JumpList support for MRU lists in the Windows taskbar
  • Added Multi-touch support

Scott announced a new WPF toolkit being released this week that includes:

  • DatePicker, DataGrid, Calendar controls
  • Visual State Manager support (like Silverlight 2)
  • Ribbon control  (CTP for now)

Scott talked about some of the basics coming in .NET 4 (coming sometime in 2009?):

  • Different versions of .NET CLR running SxS in the same process
  • Easier managed/native interop
  • Support for dynamic languages
  • Extensibility Component Model (MEF)

At this point, Scott also started dabbling in the but wait, there’s more world, as he demoed Visual Studio 2010:

  • Much better design-time support for WPF
  • Visual Studio itself now rewritten in WPF
  • Multi-monitor support
  • More re-factoring support
  • Better support for Test Driven Development workflow
  • Can easily create plugins using MEF

Whew.  Now he got to the truly sexy part—probably the section of the keynote that got the biggest reaction out of the developer crowd.  Scott showed off a little “third party” Visual Studio plug-in that pretty-formatted XML comments (e.g. function headers) as little graphical WPF widgets.  Even better, the function headers, now graphically styled, also contained hot links right into a local bug database.  Big cheers.

Sean’s prediction—this will lead to a new ecosystem for Visual Studio plugins and interfaces to other tools.

Another important takeaway—MEF, the new extensibility framework, isn’t just for Visual Studio.  You can also use MEF to extend your own applications, creating your own framework.

Tesco.com Demo of Rich WPF Client Application

Here we got our obligatory partner demo, as a guy from Tesco.com showed off their snazzy application that allowed users to order groceries.  Lots of 2D and 3D graphical effects—one of the more compelling WPF apps that I’ve seen demoed.

Scott Redux

Scott came back out to talk a bit about new and future offerings on the web development side of things.

Here are some of the ASP.NET improvements that were delivered with .NET 3.5 SP1:

  • Dynamic Data
  • REST support
  • MVC (Model-View-Controller framework)
  • AJAX / jQuery  (with jQuery intellisense in Visual Studio 2008)

ASP.NET 4 will include:

  • Web Forms improvements
  • MVC improvements
  • AJAX improvements
  • Richer CSS support
  • Distributed caching

Additionally, Visual Studio 2010 will include better support for web development:

  • Code-focused improvements  (??)
  • Better JavaScript / AJAX tooling
  • Design View CSS2 support
  • Improved publishing and deployment

Scott then switched gears to talk about new and future offerings for Silverlight.

Silverlight 2 was just RTM’d two weeks ago.  Additionally, Scott presented two very interesting statistics:

  • Silverlight 1 is now present on 25% of all Internet-connected machines
  • Silverlight 2 has been downloaded to 100 million machines

IIS will inherit the adaptive (smooth) media streaming that was developed for the NBC Olympics web site.  This is available today.

A new Silverlight toolkit is being released today, including:

  • Charting controls, TreeView, DockPanel, WrapPanel, ViewBox, Expander, NumericUpDown, AutoComplete et al
  • Source code will also be made available

Visual Studio 2010 will ship with a Silverlight 2 designer, based on the existing WPF designer.

We should also expect a major release of Silverlight next year, including things like:

  • H264 media support
  • Running Silverlight applications outside of the browser
  • Richer graphics support
  • Richer data-binding support

Whew.  Take a breath...

David Treadwell – Live Services

While we were all still reeling from Scott Gu’s segment, David Treadwell, Corporate VP, Live Platform Services at Microsoft, came out to talk about Live Services.

The Live Services offerings are basically a set of services that allow applications to interface with the various Windows Live properties.

The key components of Live Services are:

  • Identity – Live ID and federated identity
  • Directory – access to social graph through a Contacts API
  • Communication & Presence – add Live Messenger support directly to your web site
  • Search & Geo-spatial – including mashups on your web sites

The Live Services are all made available via standards-based protocols.  This means that you can invoke them from not only the .NET development world, but also from other development stacks.

David talked a lot about Live Mesh, a key component of Live Services:

  • Allows applications to bridge Users / Devices / Applications
  • Data synchronization is a core concept

Applications access the various Live Services through a new Live Framework:

  • Set of APIs that allow apps to get at Live Services
  • Akin to CLR in desktop environment
  • Live Framework available from PC / Web / Phone applications
  • Open protocol, based on REST, callable from anything

Ori Amiga Demo

Ori Amiga came out to do a quick demonstration of how to “Meshify” an existing application.

The basic idea of Mesh is that it allows applications to synchronize data across all of a user’s devices.  But importantly, this only works for users who have already signed up for Live Mesh.

Live Mesh supports storing the user’s data “in the cloud”, in addition to on the various devices.  But this isn’t required.  Applications could use Mesh merely as a transport mechanism between instances of the app on various devices.

Takeshi Numoto – The Closer

Finally, Takeshi Numoto, GM, Office Client at Microsoft, came out to talk about Office 14.

Office 14 will deliver Office Web Applications—lightweight versions of the various Office applications that run in a browser.  Presumably they can also store all of their data in the cloud.

Takeshi then did a demo that focused a bit more on the collaboration features of Office 14 than on the ability to run apps in the browser.  (Running in the browser just works and the GUI looks just like the rich desktop-based GUI).

Takeshi showed off some pretty impressive features and use cases:

  • Two users editing the same document at the same time, both able to write to it
  • As users change pieces of the document, a little graphical widget shows up on the other user’s screen, showing what piece the first user is currently changing.  All updated automatically, in real-time
  • Changes are pushed out immediately to other users who are viewing/editing the same document
  • This works in Word, Excel, and OneNote  (at least these apps were demoed)
  • Can publish data out to data stores in Live Services

Ray’s Wrapup

Ray Ozzie came back out to wrap everything up.  He pointed out that everything we’d seen today was real.  He also pointed out that some of these technologies were more “nascent” than others.  In other words—no complaints if some bits don’t work perfectly.  It seemed an odd note to end on.

Session – Microsoft Sync Framework Advances

PDC 2008, Day #1, Session #3, 1 hr 15 mins

Lev Novik

My next session talked about the next version of something called the Microsoft Sync Framework.  I’d never heard of the framework, prior to the talk.  And I’m still not exactly sure where it fits into the full family of Microsoft’s cloud-based services and tools.

My best guess for now is that the Sync Framework basically lives in the Live Services arena, where it supports Live Mesh.

My understanding of where the Sync Framework fits in goes something like this.  You can write an application against the Mesh Framework, allowing you to take advantage of the support for feeds and synchronization that are part of that framework.  And if you wanted data synchronization between applications that run exclusively in this framework, you’d stop there.  But if you want to synchronize data between legacy applications that were not written to be Mesh apps, you’d use the Sync Framework to write a custom sync provider for that legacy application.  This would allow that application’s data to sync up with other Sync Framework providers, or with Mesh applications.

That’s my current take on where the Sync Framework fits in, but I’ll maybe learn a bit more in later sessions.  One thing that wasn’t clear from Lev’s talk was whether Mesh itself is using the Sync Framework under the covers, or whether it’s something entirely different.

Lev talked very fast and sounded just a little bit like Carl Sagan.  But once I got past that, he did a decent job of showing how sync providers work to synch up various types of data.

The basic takeaway here goes something like this:

  • Data synchronization done properly is tough
  • The Sync Framework gives you all the hooks for handling the complexity

Lev pointed out that getting the first 80% of data synchronization working is fairly easy.  It’s the remaining 20%, when you start dealing with various conflicts, that can get tricky.

V1 of the Sync Framework has built-in providers for File Systems and Relational Databases.  Version 2 adds support for SQL Data Services and for Live Mesh.

The bulk of Lev’s talk was a demo where he showed the underlying code in a simple sync provider, synching data between his local file system, his Live Mesh data store, and a SmugMug account.

The synchronization was pretty cool and worked just like you’d expect it to.  Throughout the demo, Lev was doing a 3-way synchronization and files added, updated, or deleted in one of the endpoints would be replicated at the other two.

Lev also talked a fair bit about how you deal with conflicts in the sync framework.  Conflicts are things like concurrent updates—what do you do when two endpoints change the same file at the same time?  Lev demonstrated how conflicts are resolved in the sync providers that you write, using whatever logic the provider wants to use.
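
The Sync Framework’s actual API is .NET-based, but the conflict-hook idea can be sketched generically: track a version per item, detect concurrent updates, and delegate the resolution policy to the provider.  A toy last-writer-wins example:

```python
# Toy conflict handling: both endpoints changed the item since the last
# common version, so the provider's policy decides which write wins.
from dataclasses import dataclass

@dataclass
class Item:
    key: str
    value: str
    version: int        # bumped on every local change
    modified_at: float  # timestamp used by this provider's policy

def resolve_conflict(local, remote):
    """Provider-specific policy: here, last writer wins."""
    return local if local.modified_at >= remote.modified_at else remote

def sync_item(local, remote, base_version):
    local_changed = local.version > base_version
    remote_changed = remote.version > base_version
    if local_changed and remote_changed:   # concurrent updates: a conflict
        return resolve_conflict(local, remote)
    return local if local_changed else remote

winner = sync_item(
    Item("groceries", "toilet paper", 3, 100.0),
    Item("groceries", "paper towels", 3, 105.0),
    base_version=2,
)
print(winner.value)  # "paper towels" under last-writer-wins
```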

Takeaways

The Sync Framework seems a very powerful framework for supporting data synchronization across various data stores.  Out of the box, the framework supports synching between a number of popular stores, like relational databases, file systems, and Mesh applications.  And the ability to write your own sync providers is huge.  It means that you can implement synchronization across the web between almost any type of application or data store that you can imagine.