Session – Live Services: Live Framework Programming Model Architecture and Insights

PDC 2008, Day #3, Session #1, 1 hr 15 mins

Ori Amiga

My next session dug a bit deeper into the Live Framework and some of the architecture related to building a Live Mesh application.

Ori Amiga was presenting, filling in for Dharma Shukla (who just became a new Dad).

Terminology

It’s still a little unclear what terminology I should be using.  In some areas, Microsoft is switching from “Mesh” to just plain “Live”.  (E.g. Mesh Operating Environment is now Live Operating Environment).  And the framework that you use to build Mesh applications is the Live Framework.  But they are still very much talking about “Mesh”-enabled applications.

I think that the way to look at it is this:

  • Azure is the lowest level of cloud infrastructure stuff
  • The Live Operating Environment runs on top of Azure and provides some basic services useful for cloud applications
  • Mesh applications run on top of the LOE and provide access to a live.com user’s “mesh”: the devices, applications, and data that live inside it

I think that this basically means that you could have an application that makes use of the various Live Services in the Live Operating Environment without actually being a Mesh application.  On the other hand, some of the services in the LOE don’t make any sense to non-Mesh apps.

Live Operating Environment  (LOE)

Ori reviewed the Live Operating Environment, which is the runtime that Mesh applications run on top of.  Here’s a diagram from Mary Jo Foley’s blog:

This diagram sort of supports my thought that access to a user’s mesh environment is different from the basic services provided in the LOE.  According to this particular view, Live Services are services that provide access to the “mesh stuff”: the user’s contact lists, information about their devices, the data stores (data stored in the mesh or out on the devices), and the other applications in that user’s mesh.

The LOE would contain all of the other stuff: basically a set of utility classes, akin to the CLR for desktop-based applications.  (Oh wait, Azure is supposed to be “akin to the CLR”). *smile*

Ori talked about a list of services that live in the LOE, including:

  • Scripting engine
  • Formatters
  • Resource management
  • FSManager
  • Peer-to-peer communications
  • HTTP communications
  • Application engine
  • Apt(?) throttle
  • Authentication/Authorization
  • Notifications
  • Device management

Here’s another view of the architecture (you can also find it here).

Also, for more information on the Live Framework, you can go here.

Data in the Mesh

Ori made an important point about how Mesh applications access their data.  If you have a Mesh client running on your local PC, and you’ve set up its associated data store to synch between the cloud and that device, the application uses the local data rather than pulling data down from the cloud.  Because it’s working entirely with locally cached data, it can run faster than the corresponding web-based version (e.g. running in the Live Desktop).

Resource Scripts

Ori talked a lot about resource scripts and how they might be used by a Mesh-enabled application.  An application can perform actions in the Mesh using these resource scripts, rather than performing actions directly in the code.

The resource scripting language contains things like:

  • Control flow statements – sequence and interleaving, conditionals
  • Web operation statements – to issue HTTP POST/PUT/GET/DELETE
  • Synchronization statements – to initiate data synchronization
  • Data flow constructs – for binding statements to other statements(?)

Ori did a demo that showed off a basic script.  One of the most interesting things was how he combined sequential and interleaved statements.  The idea is that you specify what things you need to do in sequence (like getting a mesh object and then getting its children), and what things you can do in parallel (like getting a collection of separate resources).  The parallelism is automatically taken care of by the runtime.
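To make this concrete, here’s a rough C# sketch of that sequential-plus-interleaved pattern, reconstructed from my notes.  The statement names (Statement.Sequence, Statement.Interleave, ReadResource, and so on) are my approximation of the CTP’s resource scripting API, so treat them as hypothetical rather than verified signatures; the URIs are placeholders.

// Hypothetical sketch of a resource script (names approximate the CTP API).
// Sequence says "do these in order"; Interleave says "the runtime may run
// these in parallel".  You describe the plan; the LOE executes it.
Uri meshObjectUri = new Uri("https://user-ctp.windows.net/V0.1/Mesh/MeshObjects/XYZ");            // placeholder id
Uri dataFeedsUri  = new Uri("https://user-ctp.windows.net/V0.1/Mesh/MeshObjects/XYZ/DataFeeds");  // placeholder
Uri newsFeedUri   = new Uri("https://user-ctp.windows.net/V0.1/Mesh/News");                       // placeholder

var script = Statement.Sequence(
    Statement.ReadResource(meshObjectUri),      // step 1: get the mesh object...
    Statement.Interleave(                       // step 2: ...then fetch independent
        Statement.ReadResourceCollection(dataFeedsUri),    // collections in parallel
        Statement.ReadResourceCollection(newsFeedUri)));

script.Run();  // ordering and parallelism are handled by the runtime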

Custom Data

Ori also talked quite a bit about how an application might view its data.  The easiest thing to do would be to simply invent your own schema and then be the only app that reads/writes the data in that schema.

A more open strategy, however, would be to create a data model that other applications could use.  Ori talked philosophically here, arguing that this openness serves to improve the ecosystem.  If you can come up with a custom data model that might be useful to other applications, they could be written to work with the same data that your application uses.

Ori demonstrated this idea of custom data in Mesh.  Basically you create a serializable class and then mark it up so that it gets stored as user data within a particular DataEntry.  (Remember: Mesh objects | Data feeds | Data entries).
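Here’s roughly what that markup looks like.  A minimal sketch: the [DataContract] attributes are standard .NET serialization, but the SetUserData/GetUserData calls on the entry are from memory of the CTP bits, so verify them against the SDK.

using System.Runtime.Serialization;

// A serializable type that the Mesh stores as user data inside a DataEntry.
[DataContract]
public class GroceryItem
{
    [DataMember] public string Name { get; set; }
    [DataMember] public bool Purchased { get; set; }
}

// Writing: the object is serialized into the entry's AtomPub payload.
dataEntry.SetUserData<GroceryItem>(new GroceryItem { Name = "Milk" });

// Reading it back on another device:
GroceryItem item = dataEntry.GetUserData<GroceryItem>();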

This seems like an attractive idea, but the implementation is a bit clunky.  The custom data is embedded into the standard AtomPub stream, but it looked like it was just jammed into an XML element inside the <DataEntry> element, which means that your custom data items are not directly queryable.

Ori did go on to admit that custom data isn’t queryable or indexable, and that it’s really intended only for “lightweight data”.  That is at odds with the philosophy of a reusable schema that other applications can share.

Tips & Tricks

Finally, Ori presented a handful of tips & tricks for working with Mesh applications:

  • To clean out local data cache, just delete the DB/MR/Assembler directories and re-synch
  • Local metadata is actually stored in SQL Server Express.  Go ahead and peek at it, but be careful not to mess it up.
  • Use the Resource Model Browser to really see what’s going on under the covers.  What it shows you represents the truth of what’s happening between the client and the cloud
  • One simple way to track synch progress is to just look at the size of the Assembler and MR directories
  • Collect logs and send to Microsoft when reporting a problem

Summary

Ori finished with the following summary:

  • Think of the cloud as just a special kind of device
  • There is a symmetric cloud/client programming model
  • Everything is a Resource

Session – Live Services: What I Learned Building My First Mesh Application

PDC 2008, Day #2, Session #1, 45 mins

Don Gillett

In my next session, Don Gillett explained how he wrote one of the first Live Mesh applications.  Jot is a little note-taking application that synchronizes simple lists across instances running on a PC, on a phone, or in your web browser as a “Mesh application”.

Mesh is all about data synchronization and Jot demonstrates how this works.  To start with, the following preconditions must all be set up:

  • User has created a Live Mesh account  (this is free and comes with a free/default amount of storage)
  • Jot has been installed to the PC and the phone
  • Jot has been installed in the Mesh desktop as a “Mesh application”

At this point, Jot will be able to take advantage of the Mesh platform to automatically synchronize its lists with all three endpoints.

Here’s one possible scenario:

  • John is at home, fires up Jot on his PC and adds “toilet paper” to the “Groceries” list in Jot
  • Data is automatically synchronized to the other devices in the Mesh—Jot running on the Mesh desktop and John’s wife’s phone
  • John’s wife stops at the grocery store on the way home, fires up Jot on her phone and gets the current grocery list, which now includes toilet paper

Jot is a very simple application, but it demonstrates the basics of how the Mesh platform works.  Its primary goal is to synchronize data across multiple devices.

Out of the box, you can synchronize data without creating or running Mesh applications.  You just create a data folder up on the Mesh desktop and then set it up to synchronize on the desired devices.  Changes made to files in the corresponding folders on any of the devices will be automatically pushed out to the other devices, as well as to the folder on the Mesh desktop.

In this out-of-the-box data synch scenario, because the data lives on the Mesh desktop, you get two main benefits:

  • Data is always being backed up “to the cloud” because it’s stored in the Mesh
  • You can access the data from any public web access point by simply going to the Mesh web site

Writing Your Own Mesh-Enabled Application

Applications written to support Mesh can take advantage of this cross-device synchronization, but only for users who have signed up for a Live Mesh account.

Because the technology benefits only those users, it remains to be seen how widespread it will be.  If Live Services and Live Mesh aren’t widely adopted, the ecosystem for Mesh applications will be equally limited.

But if you do write an application targeted at Mesh users, the API for subscribing to and publishing Mesh data is very easy to use.  It is simply another .NET namespace that you can get at.  And if you’re writing non-.NET applications, all of the same functionality is available over HTTP, using a REST interface.
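To illustrate the REST side, here’s a minimal sketch of reading mesh objects as a raw AtomPub feed.  The CTP endpoint URL is the one I jotted down (treat it as an assumption), and real authentication went through a Windows Live ID ticket; the credentials below are just a placeholder.

using System;
using System.IO;
using System.Net;

class MeshRestSketch
{
    static void Main()
    {
        // Developer LOE endpoint from the CTP -- an assumption, not gospel.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://user-ctp.windows.net/V0.1/Mesh/MeshObjects");

        // Placeholder; the real CTP used a Windows Live ID ticket here.
        request.Credentials = new NetworkCredential("user@example.com", "password");

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The body is an AtomPub feed; each <entry> is a MeshObject.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}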

What Types of Data Can You Synchronize?

It’s important to remember that Mesh applications aren’t just limited to accessing shared files in shared folders.  Every Mesh application can basically be thought of as a custom data provider and data consumer.  This means two things:

  • It can serve data up to the mesh in any format desired (not just files)
  • That format is understood by the same app running on other devices

As an example, your Mesh application might serve up metadata about files along with the files themselves, combining them with other data local to that device.

The Data Model

Don talked about the underlying data model that your application uses when publishing or consuming data.  It looks something like this:

Mesh Objects
    Data Feeds
        Data Entries (optionally pointing to Enclosures)

Mesh Objects are the top-level shareable units that your Mesh application traffics in.  They can contain one or more Data Feeds.  If they are too large, synchronization will be too costly, because they will change too often.  If they are too small, or fine-grained, users may get shared data that is incomplete.  What’s important is that a Mesh application gets to decide what data units it wants to serve up to the mesh.

Data Feeds represent a collection of data.

Data Entries are the individual chunks of data within a Data Feed.  Each is uniquely identified.  These are the smallest logical storage units and the chunks that will be synchronized using the underlying synchronization technology.

Enclosures are used when your data grows too large to store directly in a text-based feed.  They are owned by, and pointed to by, an associated Data Entry.  Media files are an example of data that would be stored in an Enclosure, rather than in the Data Entry itself.
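Pulling that together, here’s a sketch of creating the hierarchy with the Live Framework client library (Microsoft.LiveFX.Client in the CTP).  The Add(ref ...) shape matches the SDK samples as I remember them, but consider the exact signatures, especially Connect, an assumption.

// Connect to the Live Operating Environment (signature approximate).
var loe = new LiveOperatingEnvironment();
loe.Connect(credentials);

// MeshObject: the top-level shareable unit.
var meshObject = new MeshObject("Groceries");
loe.Mesh.MeshObjects.Add(ref meshObject);

// DataFeed: a collection of data within the mesh object.
var feed = new DataFeed("Items");
meshObject.DataFeeds.Add(ref feed);

// DataEntry: the smallest unit the platform synchronizes.
var entry = new DataEntry("Milk");
feed.DataEntries.Add(ref entry);

// A large payload (say, a photo of the item) would go into an Enclosure
// owned by the entry, rather than into the entry itself.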

The Evolution of a Meshified Application

Don presented a nice plan for how you might start with a simple application and then work towards making it a fully Mesh-aware application:

  1. Persist (read/write) data to a local file
  2. Read/write data from/to a local DataFeed object
  3. Utilize FeedSync to read/write your data from/to the Mesh “cloud”

Don then walked through an example, building a tiny application that would synch up a list of simple “checklist” objects, containing just a string and an IsChecked Boolean.
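Here’s a sketch of step 1, using the checklist item Don described (just a string and an IsChecked Boolean) with stock .NET XML serialization.  The class names are mine; the point is that steps 2 and 3 swap out this storage code for a DataFeed and then FeedSync, while the data type stays the same.

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class ChecklistItem
{
    public string Text { get; set; }
    public bool IsChecked { get; set; }
}

public static class ChecklistStore
{
    static readonly XmlSerializer serializer =
        new XmlSerializer(typeof(List<ChecklistItem>));

    // Step 1: persist the list to a local file.
    public static void Save(List<ChecklistItem> items, string path)
    {
        using (var stream = File.Create(path))
            serializer.Serialize(stream, items);
    }

    public static List<ChecklistItem> Load(string path)
    {
        using (var stream = File.OpenRead(path))
            return (List<ChecklistItem>)serializer.Deserialize(stream);
    }
}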

Final Thoughts

Don mentioned other Live Services APIs that your application can take advantage of, beyond data synchronization:

  • Live ID
  • Authentication
  • Contact lists
  • News Feeds  (who is changing what in Mesh)

He also mentioned a couple of tools that are very useful when writing/debugging Mesh applications:

  • Resource Model Browser  (available in the Live Framework SDK)
  • Fiddler 2, a third-party web traffic logger, from fiddlertool.com

Drawing a Cube in WPF

It’s time to draw a simple 3D object using WPF.  As a quick introduction to 3D graphics in WPF, let’s just render one of the simplest possible objects—a cube.

In this example, I’ll define everything that we need directly in XAML.  As with everything else in WPF, we could do all this directly in code.  But defining everything in the XAML is a bit cleaner, in that it makes the object hierarchy more obvious.  In a real-world project, you’d obviously do some of this in code, e.g. the creation or loading of the 3D mesh (the object that we want to display).

Let’s start with the final XAML.  Here are the full contents of the Window1.xaml file:

<Window x:Class="SimpleCube.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="398" Width="608"
    <Grid>
        <Viewport3D Name="viewport3D1">
            <Viewport3D.Camera>
                <PerspectiveCamera x:Name="camMain" Position="6 5 4" LookDirection="-6 -5 -4">
                </PerspectiveCamera>
            </Viewport3D.Camera>
            <ModelVisual3D>
                <ModelVisual3D.Content>
                    <DirectionalLight x:Name="dirLightMain" Direction="-1,-1,-1">
                    </DirectionalLight>
                </ModelVisual3D.Content>
            </ModelVisual3D>
            <ModelVisual3D>
                <ModelVisual3D.Content>
                    <GeometryModel3D>
                        <GeometryModel3D.Geometry>
                            <MeshGeometry3D x:Name="meshMain"
                                Positions="0 0 0  1 0 0  0 1 0  1 1 0  0 0 1  1 0 1  0 1 1  1 1 1"
                                TriangleIndices="2 3 1  2 1 0  7 1 3  7 5 1  6 5 7  6 4 5  6 2 0  2 0 4  2 7 3  2 6 7  0 1 5  0 5 4">
                            </MeshGeometry3D>
                        </GeometryModel3D.Geometry>
                        <GeometryModel3D.Material>
                            <DiffuseMaterial x:Name="matDiffuseMain">
                                <DiffuseMaterial.Brush>
                                    <SolidColorBrush Color="Red"/>
                                </DiffuseMaterial.Brush>
                            </DiffuseMaterial>
                        </GeometryModel3D.Material>
                    </GeometryModel3D>
                </ModelVisual3D.Content>
            </ModelVisual3D>
        </Viewport3D>
    </Grid>
</Window>

The basic idea here is that we need a Viewport3D object that contains everything required to render our cube.  The simplified structure, showing the Viewport3D and its child objects, is:

Viewport3D
    ModelVisual3D   (defines lighting)
        DirectionalLight
    ModelVisual3D   (defines object to render)
        GeometryModel3D
            MeshGeometry3D
            DiffuseMaterial

Here’s what each of these objects is responsible for:

  • Viewport3D – A place to render 3D stuff
  • ModelVisual3D – A 3D object contained by the viewport, either a light or a geometry
  • DirectionalLight – A light shining in a particular direction
  • GeometryModel3D – A 3D geometrical object
  • MeshGeometry3D – The set of triangles that defines a 3D object
  • DiffuseMaterial – Material used to render a 3D object, e.g. a brush

Perhaps the most interesting of these classes is MeshGeometry3D.  A “mesh” basically consists of a series of triangles, typically all connected, that form the 3D object that you want to render.  The MeshGeometry3D object defines a mesh by specifying a series of points and then a collection of triangles.  The collection of points represents all of the vertexes in the mesh and is defined by the Positions property.  The triangles, stored in the TriangleIndices property, are defined in terms of the points, using indexes into the Positions collection.

This seems a bit odd at first.  Why not just define a collection of triangles, each consisting of three points?  Why define the points as a separate collection and then define the triangles by referencing the points?  The answer is that this scheme allows reusing a single point in multiple triangles.

In our case, drawing a cube, we define eight points, for the eight vertexes of the cube.  The image below shows the points numbered from 0-7, matching the order that we add them to Positions.  The back left corner of the cube is located at (0, 0, 0).

After defining the points, we define the 12 triangles that make up the surface of the cube: two triangles per face.  We define each triangle by simply listing the indexes of the three points that make it up.
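Since a real-world project would build or load the mesh in code rather than in XAML (as mentioned earlier), here’s the same cube mesh constructed programmatically.  The helper name BuildCubeMesh is just mine; the API calls are standard WPF.

using System.Windows.Media.Media3D;

MeshGeometry3D BuildCubeMesh()
{
    var mesh = new MeshGeometry3D();

    // The eight vertexes, in the same order as the Positions attribute above.
    Point3D[] corners =
    {
        new Point3D(0, 0, 0), new Point3D(1, 0, 0),
        new Point3D(0, 1, 0), new Point3D(1, 1, 0),
        new Point3D(0, 0, 1), new Point3D(1, 0, 1),
        new Point3D(0, 1, 1), new Point3D(1, 1, 1)
    };
    foreach (Point3D corner in corners)
        mesh.Positions.Add(corner);

    // The twelve triangles (two per face), as indexes into Positions;
    // these are the same values as the TriangleIndices attribute above.
    int[] indices =
    {
        2, 3, 1,  2, 1, 0,  7, 1, 3,  7, 5, 1,
        6, 5, 7,  6, 4, 5,  6, 2, 0,  2, 0, 4,
        2, 7, 3,  2, 6, 7,  0, 1, 5,  0, 5, 4
    };
    foreach (int index in indices)
        mesh.TriangleIndices.Add(index);

    return mesh;
}

// Usage: assign the result to the GeometryModel3D's Geometry property in code.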

It’s also important to pay attention to the order in which we list the indexes for each triangle.  The order dictates the direction of the vector normal to the triangle, which determines which side of the triangle we can see.  The rule is: add vertexes counter-clockwise, as you look at the visible face of the triangle.

In addition to the mesh, we define a material used to render the cube.  In this case, it’s a DiffuseMaterial, which allows painting the surface of the cube with a simple brush.

We also need to add a camera to our scene, by specifying where it is located and what direction it looks in.  In order to see our cube, we put the camera at (6, 5, 4) and then set its LookDirection, a vector, to (-6, -5, -4) so that it looks back towards the origin.

Finally, in order to see the cube, we need lighting.  We define a single light source, which is a DirectionalLight—a light that has no position, but just casts light in a particular direction.
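For completeness, here’s the camera and light set up in code-behind, equivalent to the XAML above.  This uses the viewport3D1 name from the XAML; the calls themselves are standard WPF.

using System.Windows.Media;
using System.Windows.Media.Media3D;

// Camera at (6, 5, 4), looking back toward the origin.
viewport3D1.Camera = new PerspectiveCamera
{
    Position = new Point3D(6, 5, 4),
    LookDirection = new Vector3D(-6, -5, -4)
};

// A white directional light: no position, just a direction.
viewport3D1.Children.Add(new ModelVisual3D
{
    Content = new DirectionalLight(Colors.White, new Vector3D(-1, -1, -1))
});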

The final result is a simple red cube.