PDC 2008, Day #2, Session #2, 1 hr 15 mins
Ok, I’m finally starting to understand the very basics of Astoria (now branded ADO.NET Data Services), as well as how it relates to the Entity Data Model and SQL Data Services.
There have been a lot of new data-related services and features coming out of Microsoft lately. Astoria, SQL Data Services, the Entity Data Model, dynamic data in ASP.NET, etc. Without understanding what each of them does, it’s hard to see how they all fit together, or when you’d use the different services.
Mike Flasko started this session by talking about the “data services landscape”. (He had a nice picture that I’d include here if I had a copy.) There are a few points to be made about the various data services:
- A wide spectrum of data services is available
  - Some you create yourself, some come from 3rd parties
  - Some are on-premises, some are hosted/cloud
- Characteristics of Microsoft-supplied services
  - Simple REST interface
  - Uniform interface (URIs, HTTP, AtomPub), i.e. the underlying data is different, but you access it in the same way, no matter which service you use
(By the way, the new buzzword at PDC 2008 is “on premises” data and services—to be contrasted with data and services that reside “in the cloud”).
ADO.NET Data Services fits in as follows—it’s a protocol for serving up data from different underlying sources using feeds and a REST-ful interface.
The basic idea behind REST and feeds is that this is a simple HTTP-based protocol—which means that any tool running on any platform or development stack can make use of it. For example, because we use a simple HTTP GET request to read data, any browser or software that knows how to access the web can read our data.
REST is basically a way to piggyback classic CRUD data operations on top of existing HTTP verbs. Here’s how they map:
- HTTP POST – Create a data item
- HTTP GET – Read one or more data items
- HTTP PUT – Update a data item
- HTTP DELETE – Delete a data item
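To make the mapping concrete, here’s a small sketch in Python using only the standard library. The service root and key are placeholders I made up for illustration; it just builds one request per CRUD operation without sending anything over the wire:

```python
import urllib.request

# Placeholder service root -- a real ADO.NET Data Services endpoint
# would expose entity sets under a .svc root something like this.
BASE = "http://example.com/MyService.svc/Customers"

# Create: POST a new entry to the entity set.
create = urllib.request.Request(BASE, data=b"<entry>...</entry>", method="POST")

# Read: a plain GET against the entity set (exactly what a browser does).
read = urllib.request.Request(BASE, method="GET")

# Update: PUT the changed entry to the item's own URI.
update = urllib.request.Request(BASE + "('ALFKI')", data=b"<entry>...</entry>", method="PUT")

# Delete: DELETE against the item's URI.
delete = urllib.request.Request(BASE + "('ALFKI')", method="DELETE")

for r in (create, read, update, delete):
    print(r.get_method(), r.full_url)
```

The point is simply that there is no proprietary wire format here: any stack that can form these four kinds of HTTP requests can talk to the service.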
As an example of how you might use ADO.NET Data Services to provide a uniform API to multiple data sources, Mike talked about an application reading/writing data from several different locations:
- From an on-premises SQL Server database
- From SQL Data Services
In both cases, you access the data through the ADO.NET Data Services framework, instead of going to SQL Server or SQL Data Services directly. You can easily add an ADO.NET Data Services layer that sits on top of an on-premises SQL Server database. And Mike said that SQL Data Services will support the ADO.NET Data Services conventions.
Mike did a nice demo showing exactly how you might consume on-premises SQL Server data through an ADO.NET Data Services API. He started by creating a data model using the Entity Data Model framework. The Entity Data Model is basically just an object/relational mapper that allows you to build a logical data model on top of your physical database. Once you’ve done this, you just create a service that wires up to the Entity Data Model and exposes its data through ADO.NET Data Services (i.e. a REST-ful interface).
Mike then walked through the actual code for accessing the service that he created. At this point, you can do everything (Create/Read/Update/Delete) using HTTP POST/GET/PUT/DELETE and simple ASCII URIs. For example, you can read one or more data elements just by entering the correct URI in your browser (because the browser issues an HTTP GET by default).
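The addressing scheme follows a consistent pattern: entity set, then an optional key in parentheses, then an optional navigation property. The service and entity names below are made up, but as a sketch of the convention, a little Python helper shows the shape of the URIs you’d type into a browser:

```python
def entity_uri(service_root, entity_set, key=None, navigation=None):
    """Build an ADO.NET Data Services-style URI:
    entity set, optional key in parentheses, optional navigation property."""
    uri = f"{service_root}/{entity_set}"
    if key is not None:
        # String keys go in quotes inside the parentheses, e.g. Customers('ALFKI').
        literal = f"'{key}'" if isinstance(key, str) else str(key)
        uri += f"({literal})"
    if navigation:
        uri += f"/{navigation}"
    return uri

root = "http://localhost/Northwind.svc"  # hypothetical service root
print(entity_uri(root, "Customers"))                     # the whole entity set
print(entity_uri(root, "Customers", "ALFKI"))            # one entity by key
print(entity_uri(root, "Customers", "ALFKI", "Orders"))  # its related entities
```

Pasting any of those URIs into a browser would come back as a feed (or entry), since the browser’s GET is the “read” verb.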
But… This does not mean that your application has to traffic exclusively in an HTTP-based API. It would be ugly to have your .NET managed code doing all of its data access by building up URI strings. Instead, there is a standard .NET namespace built on top of the REST-based interface that you’d use if you were writing managed code.
This layering of APIs, REST and .NET, might seem crazy. But the point is that everything that goes across the wire is REST. Microsoft has done this because it’s an open standard and opens up their services to 3rd parties. It also easily bypasses problems with firewalls. Finally, they provide an object model on top of the URI goo so that .NET applications can work with managed objects and avoid all of the URI stuff.
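The real managed layer is a .NET client library, but to illustrate the layering idea here’s a toy sketch in Python (entirely hypothetical, not the actual API): the caller works in terms of objects, sets, and keys, and the context object translates each call into the HTTP verb + URI that would actually go over the wire:

```python
class ToyDataServiceContext:
    """Toy stand-in for a client-side data service context.
    Callers never see URIs; each call is recorded as the (verb, URI)
    pair that would be sent. No network I/O happens here."""

    def __init__(self, service_root):
        self.service_root = service_root.rstrip("/")
        self.requests = []  # the (verb, uri) pairs that would go on the wire

    def _uri(self, entity_set, key=None):
        uri = f"{self.service_root}/{entity_set}"
        if key is not None:
            uri += f"('{key}')"
        return uri

    def query(self, entity_set):
        self.requests.append(("GET", self._uri(entity_set)))

    def add_object(self, entity_set, obj):
        self.requests.append(("POST", self._uri(entity_set)))

    def update_object(self, entity_set, key, obj):
        self.requests.append(("PUT", self._uri(entity_set, key)))

    def delete_object(self, entity_set, key):
        self.requests.append(("DELETE", self._uri(entity_set, key)))

ctx = ToyDataServiceContext("http://localhost/Northwind.svc/")
ctx.query("Customers")
ctx.delete_object("Customers", "ALFKI")
print(ctx.requests)
```

Everything that crosses the wire is still plain REST; the object layer just keeps the URI goo out of application code.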
Mike also showed how a Silverlight 2 application might make use of ADO.NET Data Services to get its data. Again, using ADO.NET Data Services gives us a lot of flexibility in how we architect the solution. In Mike’s case, he had the middle tier (app server) store some data in a local SQL Server and some data up in the cloud using SQL Data Services.
Mike did an excellent job at showing how ADO.NET Data Services fits into the data services landscape at Microsoft. I’m also glad that he brought the Entity Data Model into his examples, so that we could see where it fits in.
Now I’m just wondering if Mike and his team curse the marketing department every time they have to type “ADO.NET Data Services” rather than “Astoria”.