Azure Pricing for Web and Worker Roles

Here’s a simple table showing current Azure pricing for web and worker roles.  It shows the per-month cost based on the number of instances and the instance size.


# Instances    XS          S           Med         Lg          XL
1              $14.40      $86.40      $172.80     $345.60     $691.20
2              $28.80      $172.80     $345.60     $691.20     $1,382.40
3              $43.20      $259.20     $518.40     $1,036.80   $2,073.60
4              $57.60      $345.60     $691.20     $1,382.40   $2,764.80
5              $72.00      $432.00     $864.00     $1,728.00   $3,456.00
6              $86.40      $518.40     $1,036.80   $2,073.60   $4,147.20
7              $100.80     $604.80     $1,209.60   $2,419.20   $4,838.40
8              $115.20     $691.20     $1,382.40   $2,764.80   $5,529.60
9              $129.60     $777.60     $1,555.20   $3,110.40   $6,220.80
10             $144.00     $864.00     $1,728.00   $3,456.00   $6,912.00
11             $158.40     $950.40     $1,900.80   $3,801.60   $7,603.20
12             $172.80     $1,036.80   $2,073.60   $4,147.20   $8,294.40
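
The figures scale linearly: each cell is just the number of instances, times the hourly rate for the size, times roughly 720 hours in a 30-day month.  (The rates implied by the table are $0.02, $0.12, $0.24, $0.48, and $0.96 per hour for XS through XL.)  Here’s a minimal C# sketch of that arithmetic, with the rates hard-coded from the table above:

    using System;
    using System.Collections.Generic;

    class AzureComputeCost
    {
        // Hourly rates implied by the table above (USD per instance-hour).
        static readonly Dictionary<string, double> HourlyRate = new Dictionary<string, double>
        {
            { "XS", 0.02 }, { "S", 0.12 }, { "Med", 0.24 }, { "Lg", 0.48 }, { "XL", 0.96 }
        };

        const int HoursPerMonth = 720;  // the 30-day month the table assumes

        static double MonthlyCost(string size, int instances)
        {
            return HourlyRate[size] * HoursPerMonth * instances;
        }

        static void Main()
        {
            // 3 Medium instances: 3 x $0.24 x 720 = $518.40, matching the table
            Console.WriteLine("{0:C}", MonthlyCost("Med", 3));
        }
    }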


(The original post included a graph of the same data.)


Session – Services Symposium: Enterprise Grade Cloud Applications

PDC 2008, Day #4, Session #2, 1 hr 30 mins

Eugenio Pace

My second session on Thursday was a continuation of the cloud services symposium from the first session.  There was a third part to the symposium, which I did not attend.

The presenter for this session, Eugenio, was not nearly as good as Gianpaolo from the previous session, so this one was a bit less dynamic and harder to stay interested in.

The session basically consisted of a single demo, which illustrated some of the possible solutions to the Identity, Monitoring, and Integration challenges mentioned in the previous session.

Identity

Eugenio pointed out the problems involved in authentication/authorization.  You don’t want to require the enterprise users to have a unique username/password combination for each service that they use.  And pushing out the enterprise (e.g. Active Directory) credential information to the third party service is not secure and creates a management nightmare.

The proposed solution is to use a central (federated) identity system to do the authentication and authorization.  This is the purpose of the Azure Access Control service.

Management

The next part of the demo showed how Azure lets IT staff at an individual customer site remotely manage their instance of your application.  The basic things that you can do remotely include:

  • Monitor application health actively, in real time
  • Trigger administrative actions, based on the current state

The end result (and goal) is that you have the same scope of control over your application as you’d have if it were on premises.

Application Integration

Finally, Eugenio did some demos related to “process integration”—allowing your service to be called from a legacy service or system.  This demo actually woke everyone up, because Eugenio brought up an archaic green-screen AS/400 system in an emulator and proceeded to have it talk to his Azure service.

Takeaways

The conclusions were recommendations to both IT organizations and ISVs:

  • Enterprise IT organization
    • Don’t settle for sub-optimal solutions
    • Tap into the benefits of Software+Services
  • ISV
    • Don’t give them an excuse to reject your solution
    • Make use of better tools, frameworks, and services

Session – Services Symposium: Expanding Applications to the Cloud

PDC 2008, Day #4, Session #1, 1 hr 30 mins

Gianpaolo Carraro

As the last day of PDC starts, I’m down to four sessions to go.  I’ll continue doing a quick blog post on each session, where I share my notes, as well as some miscellaneous thoughts.

The Idea of a Symposium

Gianpaolo started out by explaining that they were finishing PDC by doing a pair of symposiums, each a series of three different sessions.  One symposium focused on parallel computing and the other on cloud-based services.  This particular session was the first in the set of three that addressed cloud services.

The idea of a symposium, explained Gianpaolo, is to take all of the various individual technologies and try to sort of fit the puzzle pieces together, providing a basic context.

The goal was also to present some of the experience that Microsoft has gained in early usage of the Azure platform.  He said that he himself had spent the last 6-12 months using the new Services, so he had some thoughts to share.

This first session in the symposium focused on taking existing business applications and expanding them to “the cloud”.  When should an ISV do this?  Why?  How?

Build vs. Buy and On-Premises vs. Cloud

Gianpaolo presented a nice matrix showing the two basic independent decisions that you face when looking for software to fulfill a need.

  • Build vs. Buy – Can I buy a packaged off-the-shelf product that does what I need?  Or are my needs specialized enough that I need to build my own stuff?
  • On-Premises vs. Cloud – Should I run this software on my own servers?  Or host everything up in “the cloud”?

There are, of course, tradeoffs on both sides of each decision.  These have been discussed ad infinitum elsewhere, but the basic tradeoffs are:

  • Build vs. Buy – Features vs. Cost
  • On-Premises vs. Cloud – Control vs. Economy of Scale

The graph that Gianpaolo presented showed six different classes of software, based on how you answer these questions.  Note that on the On-Premises vs. Cloud scale, there is a middle column that represents taking applications that you essentially control and moving them to co-located servers.

This is a nice way to look at things.  It shows that each individual software function can live anywhere on this graph.  In fact, Gianpaolo’s main point is that you can deploy different pieces of your solution at different spots on the graph.

So the idea is that while you might start off on-premises, you can push your solution out to either a co-located hosting server or to the cloud in general.  This is true of both packaged apps as well as custom-developed software.

Challenges

The main challenge in moving things out of the enterprise is dealing with the various issues that show up when your data needs to cross the corporate/internet boundary.

There are several separate types of challenges that show up:

  • Identity challenges – as you move across various boundaries, how does the software know who you are and what you’re allowed to access?
  • Monitoring and Management challenges – how do you know if your application is healthy, if it’s running out in the cloud?
  • Application Integration challenge – how do various applications communicate with each other, across the various boundaries?

Solutions to the Identity Problem

Gianpaolo proposed the following possible solutions to this problem of identity moving across the different boundaries:

  • Federated ID
  • Claim-based access control
  • The Geneva identity system, or CardSpace

The basic idea was that Microsoft has various assets that can help with this problem.
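
To make “claims-based access control” a bit more concrete, here is a minimal sketch using the System.Security.Claims API that later shipped in .NET 4.5 (the Geneva-era programming model differed in its details, and the claim type and values here are hypothetical):

    using System;
    using System.Security.Claims;

    class ClaimsSketch
    {
        static void Main()
        {
            // In a federated setup, these claims would arrive in a security
            // token issued by the identity provider, not be created locally.
            var identity = new ClaimsIdentity(new[]
            {
                new Claim(ClaimTypes.Name, "alice"),
                new Claim("http://example.org/claims/role", "PurchasingManager") // hypothetical claim type
            }, "Federation");

            var principal = new ClaimsPrincipal(identity);

            // The service authorizes against claims, never against raw
            // enterprise (e.g. Active Directory) credentials.
            bool canApprove = principal.HasClaim("http://example.org/claims/role", "PurchasingManager");
            Console.WriteLine(canApprove);
        }
    }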

Solutions to the Monitoring and Management Problem

Next, the possible solutions to the monitoring and management problem included:

  • Programmatic access to a “Health” model
  • Various management APIs
  • Firewall-friendly protocols
  • Powershell support

Solutions to the Application Integration Problem

Finally, some of the proposed solutions to the application integration problem included:

  • ServiceBus
  • Oslo
  • Azure storage
  • Sync framework

The ISV Perspective

The above issues were all from an IT perspective.  But you can look at the same landscape from the perspective of an independent software vendor, trying to sell solutions to the enterprise.

To start with, there are two fundamentally different ways that the ISV can make use of “the cloud”:

  • As a service mechanism, for delivering your services via the cloud
    • You make your application’s basic services available over the internet, no matter where it is hosted
    • This is mostly a customer choice, based on where they want to deploy
  • As a platform
    • Treating the cloud as a platform, where your app runs
    • Benefits are the economy of scale
    • Mostly an ISV choice
    • E.g. you could use Azure without your customer even being aware of it

When delivering your software as a service, you need to consider things like:

  • Is the feature set available via the cloud sufficient?
  • Firewall issues
  • Need a management interface for your customers

Some Patterns

Gianpaolo presented some miscellaneous design considerations and patterns that might apply to applications deployed in the cloud.

Cloudbursting

  • Design for average load, handling the ‘peak’ as an exception
  • I.e. only go to the cloud for scalability when you need to
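
As a rough illustration of that decision, here is a minimal sketch; the work item type, the capacity number, and both queues are hypothetical stand-ins for real infrastructure:

    using System.Collections.Generic;

    // Sketch of the cloudbursting decision: handle average load locally,
    // and overflow ("burst") to the cloud only at peak.
    public class WorkItem { public string Payload; }

    public class Dispatcher
    {
        private readonly Queue<WorkItem> localPool = new Queue<WorkItem>();
        private readonly Queue<WorkItem> cloudQueue = new Queue<WorkItem>();
        private const int LocalCapacity = 100;   // sized for average load

        public void Dispatch(WorkItem item)
        {
            if (localPool.Count < LocalCapacity)
                localPool.Enqueue(item);    // normal case: stay on-premises
            else
                cloudQueue.Enqueue(item);   // peak: burst to the cloud
        }
    }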

Worker / Queue / Blob Pattern

Let’s say that you have a task like encoding and publishing video.  You can push the data out to the cloud, where the encoding work happens (the raw data is placed in a “blob” in cloud storage).  You then add an entry to a queue, indicating that there is work to be done, and a separate worker process eventually picks up the entry and does the encoding work.

This is a nice pattern for supporting flexible scaling—both the queues and the worker processes could be scaled out separately.
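
Here is a sketch of the producer side of the pattern, written against the Microsoft.WindowsAzure.StorageClient library as it eventually shipped in the Azure SDK (the PDC-era SDK used sample wrapper classes instead); the account, container, queue, and file names are placeholders:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class VideoProducer
    {
        static void Main()
        {
            // Placeholder credentials; real code reads these from configuration.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

            // 1. Put the raw video bytes into a blob.
            CloudBlobClient blobs = account.CreateCloudBlobClient();
            CloudBlobContainer container = blobs.GetContainerReference("rawvideo");
            container.CreateIfNotExist();
            CloudBlob blob = container.GetBlobReference("clip42.raw");
            blob.UploadFile(@"C:\clips\clip42.raw");

            // 2. Enqueue a message naming the blob, so a worker can encode it.
            CloudQueueClient queues = account.CreateCloudQueueClient();
            CloudQueue queue = queues.GetQueueReference("encodejobs");
            queue.CreateIfNotExist();
            queue.AddMessage(new CloudQueueMessage("rawvideo/clip42.raw"));

            // A separate worker process polls the queue with GetMessage(),
            // downloads the blob, encodes it, and deletes the message.
            // Queues and workers can be scaled out independently.
        }
    }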

CAP: Pick 2 out of 3

  • Consistency
  • Availability
  • Tolerance to network Partitioning

Eventual Consistency (ACID vs. BASE)

The idea here is that we are all used to the ACID characteristics listed below.  We need to guarantee that the data is consistent and correct—which likely means that performance will suffer.  For example, a process submits data synchronously because we need to guarantee that the data gets to its destination.

But Gianpaolo talked about the idea of “eventual consistency”.  For most applications, while it’s important for your data to be correct and consistent, it’s not necessarily critical that it be consistent right now.  This leads to a model that he referred to as BASE, with the characteristics listed below.

  • ACID
    • Atomicity
    • Consistency
    • Isolation
    • Durability
  • BASE
    • Basically Available
    • Soft state
    • Eventually consistent

Fundamental Lesson

Basically the main takeaway is:

  • Put the software components in the place that makes the most sense, given their use

Session – Windows Azure Tables: Programming Cloud Table Storage

PDC 2008, Day #3, Session #3, 1 hr 15 mins

Pablo Castro, Niranjan Nilakantan

Pablo and Niranjan did a session that went into some more detail on how the Azure Table objects can be used to store data in the cloud.

Context

This talk dealt with the “Scalable Storage” part of the new Azure Services platform.  Scalable Storage is a mechanism by which applications can store data “in the cloud” in a highly scalable manner.

Data Types

There are three fundamental data types available to applications using Azure Storage Services:

  • Blobs
  • Tables
  • Queues

This session focused mainly on Tables.  Specifically, Niranjan and Pablo addressed the different ways that an application might access the storage service programmatically.

Tables

Tables are a “massively scalable” data type for cloud-based storage.  They are able to store billions of rows, are highly available, and are “durable”.  The Azure platform takes care of scaling out the data automatically to multiple servers if necessary (with some hints from the developer).

Programming Model

Azure Storage Services are accessed through ADO.NET Data Services (Astoria).  Using ADO.NET Data Services, there are basically two ways for an application to access the service:

  • .NET API   (System.Data.Services.Client)
  • REST interface   (using HTTP URIs directly)

Data Model

It’s important to note that Azure knows nothing about your data model.  It does not store data in a relational database or access it via a relational model.  Rather, you specify a Table that you’d like to store data in, along with a simple query expression for the data that you’d like to retrieve.

A Table represents a single Entity and is composed of a collection of rows.  Each row is uniquely identified by the combination of a Partition Key and a Row Key, both of which the developer specifies.  The Partition Key is also used by Azure to decide how to split the data across multiple servers.

Beyond the Row Key and Partition Key, the developer can add any other properties that she likes, up to a total of 255 properties.  While the Row and Partition Keys must be string data types, the other properties support other data types.

Partitioning

Azure storage services are meant to be automatically scalable, meaning that the data will be automatically spread across multiple servers, as needed.

In order to know how to split up the data, Azure uses a developer-specified Partition Key, which is one of the properties of each record.  (Think “field” or “column”).

The developer should pick a partition key that makes sense for his application.  It’s important to remember two things:

  • Querying for all data having a single value for a partition key is cheap
  • Querying for data having multiple partition key values is more expensive

For example, if your application often retrieves data by date and typically shows data for a single day, then it would make sense to have a CurrentDate property in your data entity and to make that property the Partition Key.

The way to think of this is that each possible unique value for a Partition Key represents a “bucket” that will contain one or more records.  If you pick a key that results in only one record per bucket, that would be inefficient.  But if you pick a key such that each bucket holds a set of records that you are likely to ask for together, this will be efficient.

Accessing the Data Programmatically

Pablo demonstrated creating the classes required to access data stored in an Azure storage service.

He started by creating a class representing the data entity to be stored in a single table.  He selected and defined properties for the Partition and Row Keys, as well as other properties to store any other desired data.

Pablo also recommended that you create a single class to act as an entry point into the system.  This class then acts as a service entry point for all of the data operations that your client application would like to perform.

He also demonstrated using LINQ to run queries against the Azure storage service.  LINQ automatically creates the corresponding URI to retrieve, create, update, or delete the data.
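
As a rough sketch of what that client code looks like (the entity and table names here are hypothetical, and the signed Authorization header that real requests need is omitted):

    using System;
    using System.Data.Services.Client;
    using System.Data.Services.Common;
    using System.Linq;

    // Hypothetical entity; PartitionKey and RowKey follow the Azure
    // Table conventions described above.
    [DataServiceKey("PartitionKey", "RowKey")]
    public class NewsItem
    {
        public string PartitionKey { get; set; }  // e.g. the date: "2008-10-30"
        public string RowKey { get; set; }        // unique within the partition
        public string Headline { get; set; }
    }

    // The single "service entry point" class Pablo recommended.
    public class NewsContext : DataServiceContext
    {
        public NewsContext(Uri serviceRoot) : base(serviceRoot) { }

        public IQueryable<NewsItem> NewsItems
        {
            get { return CreateQuery<NewsItem>("NewsItems"); }
        }
    }

    public class Program
    {
        public static void Main()
        {
            var ctx = new NewsContext(new Uri("http://myaccount.table.core.windows.net"));

            // LINQ translates this into a REST query URI, roughly:
            //   GET /NewsItems()?$filter=PartitionKey eq '2008-10-30'
            var todays = from n in ctx.NewsItems
                         where n.PartitionKey == "2008-10-30"
                         select n;

            foreach (var n in todays)
                Console.WriteLine(n.Headline);
        }
    }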

Miscellaneous

Pablo and Niranjan also touched on a few other issues that most applications will deal with:

  • Dealing with concurrent updates  (using ETag and If-Match headers)
  • Pagination  (using continuation tokens)
  • Using Azure Queues for pseudo-transactional deletion of data

Takeaways

Pablo and Niranjan demonstrated that it was quite straightforward to access Azure storage services from a .NET application.  It’s also the case that non-.NET stacks could make use of the same services using a simple REST protocol.

It was also helpful to see how Pablo used ADO.NET Data Services to construct a service layer on top of the Azure storage services.  This seems to make consuming the data pretty straightforward.

(I still might have this a little confused—it’s possible that Astoria was just being used to wrap Azure services, rather than exposing the data in an Astoria-based service to the client.  I need to look at the examples in a little more detail to figure this out).

Original Materials

You can find the video of the session at:  http://mschnlnine.vo.llnwd.net/d1/pdc08/WMV-HQ/ES07.wmv

Keynote #3 – Box, Anderson

PDC 2008, Day #2, Keynote #3, 1.5 hrs

Don Box, Chris Anderson

Don and Chris’ presentation was notable in that they did not use a single PowerPoint slide.  The entire presentation consisted of Don and Chris writing code in Visual Studio.

This will be a short post because Don and Chris went so fast and covered so much stuff that I didn’t make any attempt to take notes on specific techniques.

Their basic demo was to build a simple Azure (cloud-based) service that served up a list of processes running on the server.  They started by writing this service as a “classic” local WCF service, then publishing it on the ServiceBus, to make it available on the cloud.  And finally, they deployed the service into the cloud so that it was being hosted by the Azure servers.

The most important takeaway from this talk was that the service that they wrote was at every point accessible through standards-based protocols.  Azure-based services are accessible through a simple “REST-ful” interface.  The idea of REST is to use the existing GET/PUT/POST/DELETE verbs in HTTP to execute corresponding Read/Update/Create/Delete data operations on the server side.
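
As a minimal sketch of that verb-to-operation mapping (the resource URI here is hypothetical):

    using System;
    using System.Net;

    // Each HTTP verb maps to a data operation on a resource URI:
    // GET -> Read, POST -> Create, PUT -> Update, DELETE -> Delete.
    public class RestSketch
    {
        public static void Main()
        {
            const string uri = "http://example.cloudapp.net/processes/42";

            // GET -> read the resource
            var get = (HttpWebRequest)WebRequest.Create(uri);
            get.Method = "GET";

            // DELETE -> delete it (POST and PUT work the same way)
            var delete = (HttpWebRequest)WebRequest.Create(uri);
            delete.Method = "DELETE";

            using (var response = (HttpWebResponse)get.GetResponse())
            {
                Console.WriteLine(response.StatusCode);
            }
        }
    }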

This session was the hardest to keep up with, mentally, but also the most fun.  Don and Chris traded off talking and coding.  Both were equally quick at talking through explanations of what they’d do next, in moving from a local service to a service published up in the Azure cloud.  And both were equally quick to write the corresponding code in Visual Studio.  They probably used fewer code snippets than any of the other presenters so far at PDC.

All in all, an excellent session, though it did make my head spin.

Keynote #2 – Ozzie, Sinofsky, Guthrie, Treadwell

PDC 2008, Day #2, Keynote #2, 2 hrs

Ray Ozzie, Steven Sinofsky, Scott Guthrie, David Treadwell

Wow.  In contrast to yesterday’s keynote, where Windows Azure was launched, today’s keynote was the kind of edge-of-your-seat collection of product announcements that explain why people shell out $1,000+ to come to PDC.  The keynote was a 2-hr extravaganza of non-stop announcements and demos.

In short, we got a good dose of Windows 7, as well as new tools in .NET 3.5 SP1, Visual Studio 2008 SP1 and the future release of Visual Studio 2010.  Oh yeah—and an intro to Office 14, with online web-based versions of all of your favorite Office apps.

Not to mention a new Paint applet with a ribbon interface.  Really.

Ray Ozzie Opening

The keynote started once again today with Ray Ozzie, reminding us of what was announced yesterday—the Azure Services Platform.

Ray pointed out that while yesterday focused on the back-end, today’s keynote would focus on the front-end: new features and technologies from a user’s perspective.

He pointed out that the desktop-based PC and the internet are still two completely separate worlds.  The PC is where we sit when running high-performance, close-to-the-metal applications.  And the web is how we access the rest of the world, finding and accessing other people and information.

Ray also talked about the phone being the third main device where people spend their time.  It’s always with us, so it can respond to our spontaneous need for information.

The goal for Microsoft, of course, is that applications try to span all three of these devices—the desktop PC, the web, and the phone.  The apps that can do this, says Ozzie, will deliver the greatest value.

It’s no surprise either that Ray mentioned Microsoft development tools as providing the best platform for developing these apps that will span the desktop/web/phone silos.

Finally, Ray positioned Windows 7 as being the best platform for users, since we straddle these three worlds.

Steven Sinofsky

Next up was Steven Sinofsky, Senior VP for Windows and Windows Live Engineering Group at Microsoft.  Steven’s part of the keynote was to introduce Windows 7.  Here are a few tidbits:

  • Windows 7 now in pre-beta
  • Today’s pre-beta represents “M3”—a feature-complete milestone on the way to Beta and eventual RTM.  (The progression is M1/M2/M3/M4/Beta)
  • The beta will come out early in 2009
  • Release is still targeted at three years after the RTM of Vista, putting it at November of 2009
  • Server 2008 R2 is also in pre-beta, sharing its kernel with Windows 7

Steven mentioned three warm-fuzzies that Windows 7 would focus on:

  • Focus on the personal experience
  • Focus on connecting devices
  • Focus on bringing functionality to developers

Julie Larson-Green — Windows 7 Demo

Next up was Julie Larson-Green, Corporate VP, Windows Experience.  She took a spin through Windows 7 and showed off a number of the new features and capabilities.

New taskbar

  • Combines Alt-Tab for task switching, current taskbar, and current quick launch
  • Taskbar includes icons for running apps, as well as non-running (icons to launch apps)
  • Can even switch between IE tabs from the taskbar, or close tabs
  • Can close apps directly from the taskbar
  • Can access app’s MRU lists from the taskbar (recent files)
  • Can drag/dock windows on desktop, so that they quickly take exactly half available real estate

Windows Explorer

  • New Libraries section
    • A library is a virtual folder, providing access to one or more physical folders
    • Improved search within a library, i.e. across a subset of folders

Home networking

  • Automatic networking configuration when you plug a machine in, connecting to a new “Homegroup”
  • Automatic configuration of shared resources, like printers
  • Can search across entire Homegroup (don’t need to know what machine a file lives on)

Media

  • New lightweight media player
  • Media center libraries now shared & integrated with Windows Explorer
  • Right-click on media and select device to play on, e.g. home stereo

Devices

  • New Device Stage window, summarizing all the operations you can perform with a connected device (e.g. mobile device)
  • Configure the mobile device directly from this view

Gadgets

  • Can now exist on your desktop even without the sidebar being present

Miscellaneous

  • Can share desktop themes with other users
  • User has full control of what icons appear in system tray
  • New Action Center view is central place for reporting on PC’s performance and health characteristics

Multi-touch capabilities

  • Even apps that are not touch-aware can leverage basic gestures (e.g. scrolling/zooming).  Standard mouse behaviors are automatically mapped to equivalent gestures
  • Internet Explorer has been made touch-aware, for additional functionality:
    • On-screen keyboard
    • Navigate to hyperlink by touching it
    • Back/Forward with flick gesture

Applet updates

  • Wordpad gets Ribbon UI
  • MS Paint gets Ribbon UI
  • New calculator applet with separate Scientific / Programmer / Statistics modes

Sinofsky Redux

Sinofsky returned to touch on a few more points for Windows 7:

  • Connecting to Live Services
  • Vista “lessons learned”
  • How developers will view Windows 7

Steve talked briefly about how Windows 7 will more seamlessly allow users to connect to “Live Essentials”, extending their desktop experience to the cloud.  It’s not completely clear what this means.  He mentioned the user choosing their own non-Microsoft services to connect to.  I’m guessing that this is about some of the Windows 7 UI bits being extensible and able to incorporate data from Microsoft Live services.  Third party services could presumably also provide content to Windows 7, assuming that they implemented whatever APIs are required.

The next segment was a fun one—Vista “lessons learned”.  Steve made a funny reference to all of the feedback that Microsoft has gotten on Vista, including a particular TV commercial.  It was meant as a clever joke, but Steve didn’t get that many laughs—likely because it was just too painfully true.

Here are the main lessons learned with Vista.  (I’ve changed the verb tense slightly, so that we can read this as more of a confession).

  • The ecosystem wasn’t ready for us.
    • Ecosystem required lots of work to get to the point where Vista would run on everything
    • 95% of all PCs running today are indeed able to run Vista
    • Windows 7 is based on the same kernel, so we won’t run into this problem again
  • We didn’t adhere to standards
    • He’s talking about IE7 here
    • IE8 addresses that, with full CSS standards compliance
    • They’ve even released their compliance test results to the public
    • Win 7 ships with IE8, so we’re fully standards-compliant, out of the box
  • We broke application compatibility
    • With UAC, applications were forced to support running as a standard user
    • It was painful
    • We had good intentions and Vista is now more secure
    • But we realize that UAC is still horribly annoying
    • Most software now supports running as a standard user
  • We delivered features, rather than solutions to typical user scenarios
    • E.g. Most typical users have no hope of properly setting up a home network
    • Microsoft failed to deliver the “last mile” of required functionality
    • Much better in Windows 7, with things like automatic network configuration

The read-between-the-lines takeaway is we won’t make these same mistakes with Windows 7.  That’s a clever message.  The truth is that these shortcomings have basically already been addressed in Vista SP1.  So because Windows 7 is essentially just the next minor rev of Vista, it inherits the same solutions.

But there is one shortcoming with Vista that Sinofsky failed to mention—branding.  Vista is still called “Vista” and the damage is already done.  There are users out there who will never upgrade to Vista, no matter what marketing messages we throw at them.  For these users, we have Windows 7—a shiny new brand to slap on top of Vista, which is in fact a stable platform.

This is a completely reasonable tactic.  Vista basically works great—the only remaining problem is the perception of its having not hit the mark.  And Microsoft’s goal is to create the perception that Windows 7 is everything that Vista was not.

Enough ranting.  On to Sinofsky’s list of things that Windows 7 provides for Windows developers:

  • The ribbon UI
    • The new Office ribbon UI element has proved itself in the various Office apps.  So it’s time to offer it up to developers as a standard control
    • The ribbon UI will also gradually migrate to other Windows/Microsoft applications
    • In Windows 7, we now get the ribbon in Wordpad and Paint.  (I’m also suspecting that they are now WPF applications)

  • Jump lists
    • These are new context menus built into the taskbar that applications can hook into
    • E.g. For “most recently used” file lists
  • Libraries
    • Apps can make use of new Libraries concept, loading files from libraries rather than folders
  • Multi-touch, Ink, Speech
    • Apps can leverage new input mechanisms
    • These mechanisms just augment the user experience
    • New/unique hardware allows for some amazing experiences
  • DirectX family
    • API around powerful graphics hardware
    • Windows 7 extends the DirectX APIs

Next, Steven moved on to talk about basic fundamentals that have been improved in Windows 7:

Decrease

  • Memory — kernel has smaller memory footprint
  • Disk I/O — reduced registry reads and use of indexer
  • Power  — DVD playback draws less power; ditto for timers

Increase

  • Speed  — quicker boot time, device-ready time
  • Responsiveness  — worked hard to ensure Start Menu always very responsive
  • Scale  — can scale out to 256 processors

Yes, you read that correctly—256 processors!  Hints of things to come over the next few years on the hardware side.  Imagine how slow your single-threaded app will appear to run when running on a 256-core machine!

Sinofsky at this point ratcheted up and went into a sort of but wait, there’s more mode that would put Ron Popeil to shame.  Here are some other nuggets of goodness in Windows 7:

  • Bitlocker encryption for memory sticks
    • No more worries when you lose these
  • Natively mount/manage Virtual Hard Drives (VHDs)
    • Create VHDs from within Windows
    • Boot from VHDs
  • DPI
    • Easier to set DPI and work with it
    • Easier to manage multiple monitors
  • Accessibility
    • Built-in magnifier with key shortcuts
  • Connecting to an external projector in Alt-Tab fashion
    • Could possibly be the single most important reason for upgrading to Win 7
  • Remote Desktop can now access multiple monitors
  • Can move Taskbar all over the place
  • Can customize the shutdown button  (cheers)
  • Action Center allows turning off annoying messages from various subsystems
  • New slider that allows user to tweak the “annoying-ness” of UAC (more cheers)

As a final note, Sinofsky mentioned that as developers, we had damn well better all be developing for 64-bit platforms.  A good percentage of new Windows 7 boxes are likely to ship on x64.  (His language wasn’t this strong, but that was the message).

Scott Guthrie

As wilted as we all were with the flurry of Windows 7 takeaways, we were only about half done.  Scott Guthrie, VP, Developer Division at Microsoft, came on stage to talk about development tools.

He started by pointing out that you can target Windows 7 features from both managed (.NET) and native (Win32) applications.  Even C++/MFC are being updated to support some of the new features in Windows 7.

Scott talked briefly about the .NET Framework 3.5 SP1, which has already been released:

  • Streamlined setup experience
  • Improved startup times for managed apps  (up to 40% improvement to cold startup times)
  • Graphics improvements, better performance
  • DirectX interop
  • More controls
  • 3.5 SP1 built into Windows 7

Scott then demoed taking an existing WPF application and adding support for Windows 7 features:

  • Added a ribbon at the top of the app
  • Added JumpList support for MRU lists in the Windows taskbar
  • Added multi-touch support

Scott announced a new WPF toolkit being released this week that includes:

  • DatePicker, DataGrid, Calendar controls
  • Visual State Manager support (like Silverlight 2)
  • Ribbon control  (CTP for now)

Scott talked about some of the basics coming in .NET 4 (coming sometime in 2009?):

  • Different versions of the .NET CLR running side-by-side (SxS) in the same process
  • Easier managed/native interop
  • Support for dynamic languages
  • Managed Extensibility Framework (MEF)

At this point, Scott also started dabbling in the but wait, there’s more world, as he demoed Visual Studio 2010:

  • Much better design-time support for WPF
  • Visual Studio itself now rewritten in WPF
  • Multi-monitor support
  • More refactoring support
  • Better support for Test Driven Development workflow
  • Can easily create plugins using MEF

Whew.  Now he got to the truly sexy part—probably the section of the keynote that got the biggest reaction out of the developer crowd.  Scott showed off a little “third party” Visual Studio plug-in that pretty-formatted XML comments (e.g. function headers) as little graphical WPF widgets.  Even better, the function headers, now graphically styled, also contained hot links right into a local bug database.  Big cheers.

Sean’s prediction—this will lead to a new ecosystem for Visual Studio plugins and interfaces to other tools.

Another important takeaway—MEF, the new extensibility framework, isn’t just for Visual Studio.  You can also use MEF to extend your own applications, creating your own framework.
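
Here is a minimal sketch of that idea, using MEF’s attributed model as it later shipped in .NET 4 (System.ComponentModel.Composition); the interface and the plugin part are hypothetical:

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    // The contract that plugins export.
    public interface IGreeting { string Text { get; } }

    [Export(typeof(IGreeting))]
    public class HelloGreeting : IGreeting
    {
        public string Text { get { return "Hello from a MEF plugin"; } }
    }

    public class Host
    {
        [Import]
        public IGreeting Greeting { get; set; }

        public static void Main()
        {
            // Discover parts in the current assembly; a real host would
            // typically scan a plugins directory instead.
            var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            var container = new CompositionContainer(catalog);

            var host = new Host();
            container.ComposeParts(host);   // satisfies [Import]s from [Export]s
            Console.WriteLine(host.Greeting.Text);
        }
    }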

Tesco.com Demo of Rich WPF Client Application

Here we got our obligatory partner demo, as a guy from Tesco.com showed off their snazzy application that allowed users to order groceries.  Lots of 2D and 3D graphical effects—one of the more compelling WPF apps that I’ve seen demoed.

Scott Redux

Scott came back out to talk a bit about new and future offerings on the web development side of things.

Here are some of the ASP.NET improvements that were delivered with .NET 3.5 SP1:

  • Dynamic Data
  • REST support
  • MVC (Model-View-Controller framework)
  • AJAX / jQuery  (with jQuery intellisense in Visual Studio 2008)

ASP.NET 4 will include:

  • Web Forms improvements
  • MVC improvements
  • AJAX improvements
  • Richer CSS support
  • Distributed caching

Additionally, Visual Studio 2010 will include better support for web development:

  • Code-focused improvements  (??)
  • Better JavaScript / AJAX tooling
  • Design View CSS2 support
  • Improved publishing and deployment

Scott then switched gears to talk about new and future offerings for Silverlight.

Silverlight 2 was just RTM’d two weeks ago.  Additionally, Scott presented two very interesting statistics:

  • Silverlight 1 is now present on 25% of all Internet-connected machines
  • Silverlight 2 has been downloaded to 100 million machines

IIS will inherit the adaptive (smooth) media streaming that was developed for the NBC Olympics web site.  This is available today.

A new Silverlight toolkit is being released today, including:

  • Charting controls, TreeView, DockPanel, WrapPanel, ViewBox, Expander, NumericUpDown, AutoComplete et al
  • Source code will also be made available

Visual Studio 2010 will ship with a Silverlight 2 designer, based on the existing WPF designer.

We should also expect a major release of Silverlight next year, including things like:

  • H264 media support
  • Running Silverlight applications outside of the browser
  • Richer graphics support
  • Richer data-binding support

Whew.  Take a breath…

David Treadwell – Live Services

While we were all still reeling from Scott Gu’s segment, David Treadwell, Corporate VP, Live Platform Services at Microsoft, came out to talk about Live Services.

The Live Services offerings are basically a set of services that allow applications to interface with the various Windows Live properties.

The key components of Live Services are:

  • Identity – Live ID and federated identity
  • Directory – access to social graph through a Contacts API
  • Communication & Presence – add Live Messenger support directly to your web site
  • Search & Geo-spatial – including mashups on your web sites

The Live Services are all made available via standards-based protocols.  This means that you can invoke them not only from the .NET development world, but also from other development stacks.

David talked a lot about Live Mesh, a key component of Live Services:

  • Allows applications to bridge Users / Devices / Applications
  • Data synchronization is a core concept

Applications access the various Live Services through a new Live Framework:

  • Set of APIs that allow apps to get at Live Services
  • Akin to CLR in desktop environment
  • Live Framework available from PC / Web / Phone applications
  • Open protocol, based on REST, callable from anything

Ori Amiga Demo

Ori Amiga came out to do a quick demonstration of how to “Meshify” an existing application.

The basic idea of Mesh is that it allows applications to synchronize data across all of a user’s devices.  Importantly, though, this only works for users who have already signed up for Live Mesh.

Live Mesh supports storing the user’s data “in the cloud”, in addition to on the various devices.  But this isn’t required.  Applications could use Mesh merely as a transport mechanism between instances of the app on various devices.

Takeshi Numoto – The Closer

Finally, Takeshi Numoto, GM, Office Client at Microsoft, came out to talk about Office 14.

Office 14 will deliver Office Web Applications—lightweight versions of the various Office applications that run in a browser.  Presumably they can also store all of their data in the cloud.

Takeshi then did a demo that focused a bit more on the collaboration features of Office 14 than on the ability to run apps in the browser.  (Running in the browser just works and the GUI looks just like the rich desktop-based GUI).

Takeshi showed off some pretty impressive features and use cases:

  • Two users editing the same document at the same time, both able to write to it
  • As users change pieces of the document, a little graphical widget shows up on the other user’s screen, showing which piece the first user is currently changing.  All updated automatically, in real time
  • Changes are pushed out immediately to other users who are viewing/editing the same document
  • This works in Word, Excel, and OneNote  (at least these apps were demoed)
  • Can publish data out to data stores in Live Services

Ray’s Wrapup

Ray Ozzie came back out to wrap everything up.  He pointed out that everything we’d seen today was real.  He also pointed out that some of these technologies were more “nascent” than others.  In other words—no complaints if some bits don’t work perfectly.  It seemed an odd note to end on.

Session – A Lap Around Windows Azure, part 1

PDC 2008, Day #1, Session #1, 1 hr 15 min.

Manuvir Das

In my first PDC session, Manuvir Das gave us some more information about the Azure platform.

He started by describing the operating system as the platform for traditional desktop clients: it provides a variety of services that all applications need in order to run.

Windows Azure serves as a similar platform for web-based applications.  In other words, Azure is the operating system for the cloud.  It provides the “glue” required for cloud-based services.

According to Manuvir, the four main features of Azure as a platform are:

  • Automated service management
  • Powerful service hosting environment
  • Scalable, available cloud storage
  • Rich, familiar developer experience

Manuvir demonstrated how an Azure-based service is defined.  The developer describes the service topology and health constraints.  And Azure automatically deploys the service in the cloud.  This means that your service runs on Microsoft’s servers and is scaled out (run on more than one server) as necessary.

As a sample topology, Manuvir showed a service application consisting of a web-facing “role” and a worker “role”.  The web-facing part of the application accepted requests from the web UI, and the worker role processed the requests.  When the developer configures the service, he specifies the desired number of instances for each of these roles.  Azure seamlessly creates a separate VM for each instance of each role.
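
For a sense of the shape of the code, here is a minimal worker-role sketch using the RoleEntryPoint pattern from later Azure SDKs (Microsoft.WindowsAzure.ServiceRuntime); the PDC-era CTP differed in its details:

    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Azure instantiates this class on each VM it creates for the
    // worker role and calls Run() to do the role's work.
    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // One-time initialization (configuration, diagnostics, etc.)
            return base.OnStart();
        }

        public override void Run()
        {
            while (true)
            {
                // Typical pattern: pull a message from a queue that the
                // web role filled, process it, then delete the message.
                Thread.Sleep(1000);
            }
        }
    }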

A developer can create and test all of this infrastructure entirely on a local desktop by using the Azure SDK.  The SDK simulates the cloud environment on the developer’s PC.  This is valuable for uncovering concurrency and other issues prior to deployment.

Manuvir also talked about Azure providing scalable and available cloud-based storage.  The data elements used for storage are things like Blobs, Tables, and Queues.  Future revs of the platform will support other data objects, including File Streams, Caches, and Locks.  Note that these data structures are simpler than a true relational data source.  If a relational data store is required, the service should use something like SQL Services—Microsoft’s “SQL Server in the cloud” solution.

Rollout

Windows Azure goes “live” this week, for a limited number of users.  It will likely release as a commercial service sometime in 2009.

The business model that Azure will follow will be some sort of pay-for-use model.  Manuvir said that it would be competitive with other similar services.

At some point, Azure will better support geo-distribution of Azure-based services.  It’s also expected to eventually support the ability to run native (x86) code on the virtual servers.

Demo

The obligatory demo by a partner featured Danny Kim, from Full Armor.  Danny described an application that they had created for the government of Ethiopia, which allowed it to track thousands of teachers throughout the country and push data and documents out to the teachers’ laptops.

This demo was again a bit more interesting as a RIA demo than as a demo of Azure’s capabilities.  The only Azure-related portion involved Danny explaining that they were currently supporting 150 teachers, but would soon scale up to several hundred thousand.  Since they hadn’t yet scaled out, though, the application didn’t really highlight Azure’s capabilities.