TechEd NA 2014 – Announcing Hybrid Connections

TechEd North America 2014, Houston
Announcing Hybrid Connections: Building Amazing Hybrid Web Sites and Mobile Apps in Minutes – Santosh Chandwani

Day 4, 15 May 2014, 1:00PM-2:15PM (DEV-B307)

Disclaimer: This post contains my own thoughts and notes based on attending TechEd North America 2014 presentations. Some content maps directly to what was originally presented. Other content is paraphrased or represents my own thoughts and opinions and should not be construed as reflecting the opinion of either Microsoft, the presenters or the speakers.

Executive Summary—Sean’s takeaways

  • Hybrid connection is a simple way to access an on-premise resource from Azure
    • When you don’t want to do something more complex, like VPN / ExpressRoute
  • Connection Manager allows connecting to an on-premise TCP or HTTP service
  • Can connect to Azure web site or Mobile Service

Full video

Santosh Chandwani, Senior Program Manager, Azure, Microsoft

Evolving Enterprise Infrastructure

  • Traditionally, have put everything on a corporate network
  • Azure also has its own network
    • Makes sense to move stuff into the cloud
  • But common to want to keep some critical data on-premise
  • One way to connect these networks
    • VPN, ExpressRoute
    • Some limits
  • But sometimes you just need a simple connection to an asset running on-premise
    • Simple


  • Reinforce ability to do hybrid applications on Azure
  • Extend hybrid capabilities to all Azure services (e.g. PaaS)
  • Don’t want custom code or infrastructure on-premise
  • Secure access without changing network configuration
  • Enterprise admins continue to have control and visibility

Introducing Hybrid Connections

  • Feature of Azure BizTalk Services
    • But don’t require using all of BizTalk
  • Fast, easy way to build Hybrid Apps in preview
  • Connect Mobile Services to on-premises resources

BizTalk Services FREE Edition (Preview)

  • Preview this week
  • Use Hybrid Connections at no charge
  • Hybrid Connections and Data Transfer now included w/all BizTalk Services tiers

Key Features

  • Access to on-premises resources
    • SQL Server, or resources that use TCP or HTTP
  • Works with most frameworks
    • Support for .NET, .NET for Mobile Services
    • No mention of Web API
  • No need to alter network perimeter
    • No VPN gateway or firewall changes to allow incoming traffic
    • Applications have access only to the resource that they require
  • Maintains IT control over resources
    • Group Policy controls, so enterprise admins can control

Hybrid Connections

  • Hybrid Connection Manager
    • Can discover resources on premise
  • From Web Sites or Mobile Services

Demonstration – Web sites

  • Shows web site talking to SQL Server, both on corporate network
  • Then publish web site up to Azure
  • Talking to SQL Azure database
  • Now, set up hybrid connection
    • From Azure Portal, add
    • Name
    • Hostname – on local network, also port name
    • Hostname—could it be IP?
    • Create or use BizTalk Service
  • At this point, it’s just existing on Azure—doesn’t actually connect to anything
  • Set it up from web site, so it knows that web site wants to connect to it
  • Then Remote into desktop
  • IP address could be any device
  • Manager must run on Windows
  • Listener Setup (thru portal)
    • Connects using the same account that you used in the portal
    • They could also do manual setup, with MSI & connection string
  • Where in connection manager did we specify IP address to expose?
    Or was it because we installed it directly on the node that we want to connect to?
  • Now change conn string on web site
    • Replace connection string
  • Refresh web site, now talking to SQL Server on-premises
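
The swap in the demo amounts to a single connection-string change in the site's configuration; nothing in the application code changes. A sketch of what that might look like in web.config (server, database, and credential values here are hypothetical, not captured from the demo):

```xml
<!-- Before: pointing at the SQL Azure database (hypothetical names) -->
<add name="DefaultConnection"
     connectionString="Server=tcp:contoso.database.windows.net,1433;Database=StoreDb;User ID=storeuser;Password=..." />

<!-- After: the on-premises SQL Server, using the same hostname and port
     that were registered with the hybrid connection -->
<add name="DefaultConnection"
     connectionString="Server=ONPREMSQL01,1433;Database=StoreDb;User ID=storeuser;Password=..." />
```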

Lift and Shift

  • Lift web site up into Azure
  • Shift connection to point back to on-premise database
  • No code changes


  • Identify application by host name and port
  • Gave hostname and port to hybrid connection
    • Note: name could only be resolved on corporate network
  • Hybrid Connection Manager
    • Has gone through all security and threat models from Microsoft
  • Arrow directions—how is connection initiated
    • HCM initiates connection to both local resource and up to cloud
  • HCM pushes data
  • Once we spin up hybrid connection, we can use it from multiple services

Demo – Mobile Services

  • Mobile Services – .NET back-end
  • Can now launch and debug Mobile Service locally
  • Creating hybrid connection for Mobile Service from BizTalk Services portal
  • From Mobile Services part of portal, then pick existing Hybrid Connection (and BizTalk service)
  • Then set conn string to point to local database
  • Change code in Mobile Service to use the new connection string
  • Now running local app that goes to Mobile Service to get data
    • Mobile Service in turn is connected to hybrid connection
  • Remote to PC and install hybrid connection manager


  • Supports resources using TCP and HTTP for connectivity
    • Only static TCP ports
    • Need to know ahead of time what the port is
    • Also static IP address, presumably ?
    • Maybe dynamic ports in the future
  • Hybrid Connections don’t buffer or inspect traffic
    • TLS can be negotiated end-to-end between application and on-premises resource
    • Dynamic port redirection, e.g. FTP passive mode – doesn’t work (not supported)


  • Uses Shared Access Signature Authorization
    • Secure, Simple, Familiar
  • Separate roles for on-premises connector and application
  • Application authorization is independent
    • Between web site and on-premise resource
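
Hybrid Connections are built on Azure Service Bus, where a Shared Access Signature token is an HMAC-SHA256 signature over the resource URI plus an expiry time. A minimal sketch of that token format in Python (the URI and key names are hypothetical; this illustrates the general Service Bus SAS scheme, not code shown in the session):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    """Build a Service Bus-style Shared Access Signature token."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # Sign "<url-encoded-uri>\n<expiry>" with HMAC-SHA256 using the shared key
    to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return ("SharedAccessSignature sr={}&sig={}&se={}&skn={}"
            .format(encoded_uri, urllib.parse.quote_plus(signature),
                    expiry, key_name))
```

Because the signature covers an expiry, tokens are short-lived, and separate key names can carry the separate roles mentioned above (one for the on-premises connector, one for the application).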


  • Max 5 connections to start with
  • On-premise setup
    • Link to download MSI will be available soon
    • Can use Powershell and MSI to create connection
  • When you get the on-premise installer, it includes the set of connection strings for the connection
  • Mobile Services not yet in new Azure portal

Resiliency & Scale

  • On-Premises Resources can be scaled out as usual
    • Clustering, availability for SQL
  • Applications can be scaled out on Azure
    • Each instance of Website or Mobile Service will connect to Hybrid Connection
    • Don’t have to do anything special
  • Multiple instances of Hybrid Connection Manager supported
    • But going to same IP address
    • Gives us scale

Enterprise IT in control

  • Manage resource access for Hybrid applications
    • Group Policy controls for allowing access
    • Admins can designate resources to which Hybrid Applications have access
  • Event and Audit Logging
    • IT has insight into resources being accessed
    • IT can use existing infrastructure investments for monitoring and control
  • Dashboard on Azure portal
    • Access to connection health, status
    • Will provide insights on usage and metrics (future)



  • Fastest way to build hybrid applications
  • Lift and Shift web workloads to Azure web sites whilst connecting to on-premises data
  • On-premises data just clicks away from Azure Websites & Mobile Services

TechEd NA 2014 – Microsoft Azure Resource Manager

TechEd North America 2014, Houston
Microsoft Azure Resource Manager – Kevin Lam

Day 4, 15 May 2014, 2:45PM-4:00PM (DEV-B224)


Executive Summary—Sean’s takeaways

  • Now group all resources in Azure portal into resource groups
  • Resource group should be based on whatever grouping makes sense to you
  • Can use Powershell and REST-based API to create/destroy resources
  • Resource templates can be used or modified
    • Start with templates used to create resources listed in gallery

Full video

Kevin Lam, Principal Program Manager, Azure, Microsoft

Today’s Challenge

  • Tough to
    • Deploy or update group of resources
    • Manage permissions on group of resources
    • Visualize group of resources in logical view

We have various Singletons

  • E.g. SQL Database, web site, etc.
  • Deploy is harder
  • Proper use of resources is more abstract
  • Isolation makes communication a challenge

Resource Centric Views

  • (old portal)
  • Big flat list, long list at left

Introducing Resource Manager

  • Application Lifecycle Container
  • Declarative solution for Deployment and Configuration
  • Consistent Management Layer

Resource Groups

  • Tightly coupled containers of a collection of resources
  • Can be same type or different
  • Every resource lives in exactly one resource group
  • Resource groups can span regions
    • E.g. web site located in multiple regions

Coupling for Resources

  • Resource Group is a unit of management
    • Lifecycle – deployment, update, delete, status
    • Grouping: metering, billing, quota – applied, rolled up to entire resource group

Q: Link resource groups together?

  • Not yet. But soon. Will be different ways to view resources

Resource Group Lifecycle

  • How do you decide where to put resources?
  • Hint – do they have common lifecycle and management?
  • Answer – up to you

Power of Repeatability

  • Azure templates can
    • Ensure idempotency
    • Simplify orchestration
    • Provide cross-resource configuration and update support
  • Azure templates are:
    • Source file, checked-in
    • JSON file
    • Specifies resources and dependencies and connections
    • Parameterized input/output
  • Template drives execution list, based on dependencies
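
A skeleton of what such a template looks like: JSON with parameters, a resources array whose entries can reference one another to express dependencies, and outputs. The schema URL, API version, and resource values below are illustrative, not from the session:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": { "type": "string" }
  },
  "resources": [
    {
      "apiVersion": "2014-04-01",
      "type": "Microsoft.Web/sites",
      "name": "[parameters('siteName')]",
      "location": "West US",
      "properties": { }
    }
  ],
  "outputs": { }
}
```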

Add Your Own Power

  • Some resources can be extended by allowing more code or data inside resource
    • E.g. AV agent inside VM
    • WordPress web-deploy package on a Website
  • Allow for Scripting or Imperative configuration of resources
  • Extensible solution (Windows/Linux)
    • Chef, Puppet, etc.

Consistent Management Layer

  • Service Management API
    • Portal uses same API that is available to devs
  • Routing to appropriate resource provider
    • Resource providers adhere to resource provider contract

What Does This All Mean?

  • Application Lifecycle Container
    • Deploy/manage application as you see fit
  • Declarative solution for Deployment and Configuration
    • Single-click deployment of multiple instantiations of your application
  • Consistent Management Layer
    • Same experience of deployment and management, from wherever

Demo – Implementation

  • Postman hooked up to REST API
  • Hit API, get JSON data back
  • Examples
    • Get resource groups
    • PUT SQL Server (create)
  • Creating SQL Server node takes a bit of time
  • Then PUT SQL Database on that server
  • Anything that you PUT, you can also GET
  • Can get list of resources
    • Then index by number
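
The rough shape of the PUT call from the demo (subscription ID, group, server name, and api-version are placeholders; the demo's actual values weren't captured):

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/demoGroup/providers/Microsoft.Sql/servers/demosrv?api-version=2014-04-01
Authorization: Bearer {access-token}
Content-Type: application/json

{
  "location": "West US",
  "properties": {
    "administratorLogin": "sqladmin",
    "administratorLoginPassword": "{password}"
  }
}
```

A subsequent GET to the same URI returns the resource that was created.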

Demo – New Portal

  • Gallery – curated list of resources that you can create
    • Microsoft and 3rd party
    • All creation is done by templates
    • You can get these templates and pull them down (PowerShell)
    • Can modify and use these default templates

New JSON editor in Visual Studio

  • With intellisense

TechEd NA 2014 – Public Cloud Security

TechEd North America 2014, Houston
Public Cloud Security: Surviving in a Hostile Multitenant Environment – Mark Russinovich

Day 3, 14 May 2014, 3:15PM-4:30PM (DCIM-B306)


Executive Summary—Sean’s takeaways

  • To move to cloud, customers must trust us
  • Need to follow best practices to make things secure
    • At least as good as what your customers are doing
  • Makes sense to look at top threats and think about mitigating risk in each case
  • Azure does a lot of work to mitigate risk in many areas
    • Often far more than you’d do in your own organization
  • Top three threats
    • Data breach
    • Data loss
    • Account or service hijacking
  • Encryption at rest not a panacea

Full video

Mark Russinovich – Technical Fellow, Azure, Microsoft

“There is no cloud without trust”

  • Security, Availability, Reliability

Misconceptions about what it means to be secure in cloud

  • Will dispel some of the myths
  • Look at what’s behind some of the risks
  • Mitigation of risks

The Third Computing Era

  • 1st – Mainframes
  • 2nd – PCs and Servers
  • 3rd – Cloud + Mobile
  • (Lack of) Security could ruin everything


  • Study after study, CIOs say looking at cloud, but worried about security
  • Other concerns
    • Security
    • Compliance
    • Loss of control

Goals of this Session

  • Identify threats
  • Discuss risk
  • Mitigate

Cloud Architecture

  • Canonical reference architecture
  • Virtualized structure
  • Datacenter facility
  • Microsoft—deployment people and DevOps
  • Customers of cloud—Enterprise, Consumer
  • Attacker

Cloud Security Alliance

  • Microsoft is a member

The Cloud Security Alliance “Notorious Nine” (what are threats to data in cloud?)

  • Periodically surveys industry
  • 2010 – Seven top threats
  • 2013 – Nine top threats
  • Mark adds 10th threat

#10 – Shared Technology Issues: Exposed Software

  • Shared code defines surface area exposed to customers
    • In public cloud, servers are homogeneous—exact same firmware
    • Hypervisor
    • Web server
    • API support libraries
  • What if there’s a vulnerability?
  • Stability and security are balanced against each other
    • Patching might bring down servers
  • Assumes infrastructure is accessible only by trusted actors
  • Corporate and legal mechanisms for dealing with attackers
  • This is: Enterprise Multi-tenancy

#10 – Shared Technology Issues: The Cloud Risk

  • A vulnerability in publicly accessible software enables attackers to puncture the cloud
    • Exposes data of other customers
    • Single incident—catastrophic loss of customer confidence
    • Potential attackers are anonymous and in diverse jurisdictions
  • “Are you doing as good a job as I’d be doing if I had the data in the house”?
  • Important (vs. Critical) – data not at risk, but confidence in Azure is critical
    • “Cloud critical”
  • “Hostile Multi-tenancy”
  • We do whatever it takes to patch immediately

#10 – Shared Technology Issues: Bottom Line

  • Enterprises and clouds exposed to this risk
  • Clouds at higher risk
    • Data from lots of customers
    • API surface is easy to get to
  • Clouds are generally better at response
    • Azure has about 1,000,000 servers
    • Can do critical patch in just a couple hours, all servers
    • Breach detection/mitigation
  • Risk matrix
    • Perceived risk—bit below average
    • Actual risk – Fairly high (Mark’s assessment)

#9 – Insufficient Due Diligence

  • Moving to cloud, but side-stepping IT processes
    • Shadow IT
    • BYOIT – Bring your own IT—non-IT going to cloud
    • IT management, etc. are designed for on-premises servers
  • Bottom line
    • IT must lead responsible action

#9 – Insufficient Due Diligence – Azure

  • Azure API Discovery
    • Monitors access to cloud from each device
  • SDL
  • Cloud SDL (under development)

#8 – Abuse of Cloud Services

  • Agility and scale of cloud is attractive to users
  • Use of Compute as malware platform
  • Use of storage to store and distribute illegal content
  • Use of compute to mine digital currency
    • VMs shut down per month, due to illegal activity: 50,000-70,000
    • Bulk of it is for generating crypto currency
    • Top 3 countries that are doing this: Russia, Nigeria, Vietnam
    • Password for Vietnamese pirate: mauth123 (password123)
    • Harvard supercomputer was mining bitcoin

#8 – Abuse of Cloud Services: It’s Happening

  • Attackers can use cloud and remain anonymous
  • Bottom line
    • Mostly cloud provider problem
    • Hurts bottom line, drives up prices
  • Using machine learning to learn how attackers are working

#7 – Malicious Insiders

  • Many cloud service provider employees have access to cloud
  • Malicious check-in, immediately rolls out to everybody
  • Operators that deploy code
  • Datacenter operations personnel
  • Mitigations
    • Employee background checks
    • Limited as-needed access to production
      • No standing admin privileges
    • Controlled/monitored access to production services
  • Bottom line
    • Real risk is better understood by third-party audits

Compliance is #1 concern for companies already doing stuff in cloud

#7 – Malicious Insiders – Compliance

#6 – Denial of Service

  • Public cloud is public
  • Amazon was at one point brought down by DDOS
  • Your own app could get DDOS’d
  • Cloud outage – a form of DDOS
  • Redundant power from two different locations to each data center
  • Blipping power to data center results in major outage—several hours
  • Mitigations
    • Cloud providers invest heavily in DDOS prevention
    • Third party appliances that detect and divert traffic
    • We do this for our clients too
    • Targets large-scale DDOS; doesn’t catch smaller attacks
  • Geo-available cloud providers can provide resiliency
  • Azure
    • DDOS prevention
    • Geo-regions for failover

#5 – Insecure Interfaces and APIs

  • Cloud is new and rapidly evolving, so lots of new API surface
  • CSA – one of the biggest risks
  • Examples
    • Weak TLS crypto – DiagnosticMonitor.AllowInsecure….
    • Incomplete verification of encrypted content
  • Bottom line
    • Cloud providers must follow SDL
    • Customers should validate API behavior

#4 – Account or Service Traffic Hijacking

  • Account hijacking: unauthorized access to an account
  • Possible vectors
    • Weak passwords
    • Stolen passwords (e.g. Target breach)
    • Then you find that people use same password everywhere; so attacker can use on other services
  • Not specific to cloud
    • Cloud use may result in unmanaged credentials
    • Developers are provisioning apps, hard-coding passwords, publishing them
    • Lockboxes, “secret stores”
    • Back door—someone in DevOps gets phished, then brute force
  • Mitigations
    • Turn off unneeded endpoints
    • Strong passwords
    • Multifactor authentication
      • Entire world moving to multifactor
    • Breach detection
  • Azure
    • Anti-malware
    • IP ACLs (with static IP addresses)
    • Point-to-Site, Site-to-Site, ExpressRoute
    • Azure Active Directory MFA

#3 – Data Loss

  • Ways to lose data
    • Customer accidentally deletes data
    • Attacker deletes or modifies it
    • Cloud provider accidentally deletes or modifies it
    • Natural disaster
  • Mitigations
    • Customer: do point-in-time backups
    • Customer: geo-redundant storage
    • Cloud provider: deleted resource tombstoning
      • Can’t permanently delete
      • 90 days
  • Azure
    • Globally Replicated Storage
    • VM Capture
    • Storage snapshots
    • Azure Site Recovery

#2 – Data Breaches

  • Represents collection of threats
  • Most important asset of company is the data

#2 – Data Breaches: Physical Attacks on Media

  • Threat: Attacker has physical access to data/disk
  • Mitigation: cloud provider physical controls
    • To get in data center, gate with guards
    • To get into room with servers, biometric controls
    • Disk leaving data center—very strict controls
    • Data scrubbing and certificate
    • SSDs never leave data center, because it’s so hard to scrub it
    • HDDs are scrubbed
  • Enhanced mitigations
    • Third-party certifications (e.g. FedRAMP)
    • Encryption at rest
  • Azure: third-party encryption

Encryption at rest

  • Two types
    • Cloud provider has keys
    • Customer has keys
  • Even when you hold the keys, you end up giving them to the cloud so it can decrypt

#2 – Data Breaches: Physical Attacks on Data Transfer

  • Man-in-the-middle
  • Mitigation
    • Encrypt data between data centers
    • APIs use TLS
    • Customer uses TLS
    • Customer encrypts outside of cloud

#2 – Data Breaches: Side-Channel Attacks

  • Threat: Collocated attacker can infer secrets from processor side-effects
  • Snooping on processor that they’re co-located on
  • Researcher assumptions (but unlikely)
    • Attacker knows crypto code customer is using and key strength
    • Attacker can collocate on same server
    • Attacker shares same core as you
    • Customer VM continuously executes crypto code
  • Not very likely
  • Bottom line
    • Not currently a risk, in practice

#2 – Data Breaches: Logical Attack on Storage

  • Threat: attacker gains logical access to data
  • Mitigations
    • Defense-in-depth prevention
    • Monitoring/auditing
  • Encryption-at-rest not a significant mitigation
    • If they can breach logical access, they can maybe get keys too
    • The keys are there in the cloud
    • Encrypt-at-rest isn’t based on real threat modeling

#2 – Data Breaches: Bottom Line

  • Media breach not significant risk
  • Network breach is risk
  • Logical breach is a risk
    • Encrypt-at-rest doesn’t buy much

#1 – Self-Awareness

  • E.g. Skynet
  • People are actually worried about this

TechEd NA 2014 – Data Privacy and Protection in the Cloud

TechEd North America 2014, Houston
Data Privacy and Protection in the Cloud – A.J. Schwab, Jules Cohen, Sarah Fender

Day 2, 13 May 2014, 10:15AM-11:30AM (OFC-B233)


Executive Summary—Sean’s takeaways

  • The issue of Trust is important whenever you talk about moving data to cloud
    • Need to convince users that data will be secure, private
  • Data Privacy is key goal for Microsoft
  • Lots of tools for controlling access to data, e.g. identity management
  • Security at many layers, e.g. physical, network, etc.
    • Microsoft pours lots of resources into security for the layers that they control

Full video

Jules Cohen – Trustworthy Computing group, Microsoft

Three major buckets, when thinking about moving data to the cloud

  • Innovation properties – will cloud let me do what I want?
  • Economics – what is TCO?
  • Trust

First two buckets are relatively uncomplicated

  • Trust – harder to evaluate, more visceral
  • Privacy and data protection are part of trust


  • Microsoft has made significant investments
  • If you already trust the cloud, we’re going to improve level of trust

Changing Data Protection concerns to opportunities

  • You already trust people within your organization
  • In cloud world, some of these functions move off premises
  • Ref: Barriers to Cloud Adoption study, ComScore, Sept-2013
    • 60% – security is barrier to cloud adoption
    • 45% – concerned about data protection (privacy)


  • Can’t have privacy without security
  • Security is a pre-req
    • Do the right people have access to the data?
  • Once data is in the right hands, we can talk about privacy
    • Do people who have access to data use it for the right things?

Perceptions after migration to cloud

  • 94% – said they experienced security that they didn’t have on-premise
  • 62% – said privacy protection increased after moving to cloud

Microsoft’s approach to data protection

  • 1 – Design for privacy
    • Corporate privacy policies, disclosures
    • Trustworthy Computing formed in 2002, after memo from Bill Gates—privacy, security, reliability
  • 2 – Built-in features
    • Customers can use these features to protect their data
  • 3 – Protect data in operations
    • Operating services – Microsoft committed to data protection in service operations
    • Microsoft complies with various standards, help customers comply with those standards
  • 4 – Provide transparency and choice

Privacy governance – Program

  • Design for Privacy
  • People – Employs several hundred people focused on privacy
  • Process
    • Internal standards
    • Rules maintained by Trustworthy Computing
  • Technology
    • Use tools to support people and processes
    • Look for vulnerabilities

Privacy governance – Commitments

  • Microsoft services meet highest standards in EU (Article 29)
  • First (and only) service provider to get this approval

Sarah Fender – Director of Product Marketing, Windows Azure, Microsoft – Built-in Features

Data Protections in Azure

  • Data location – can choose to run in a single region, or multiple regions
  • Redundancy & Backup
    • 3 copies of data, within region
    • Can also do geo-redundant storage, to different region
    • E.g. Create new storage account, pick region
  • Manage identities and access to cloud applications
    • Centrally manage user accounts in cloud
    • Enable single sign-on across Microsoft online services and other cloud applications
    • Extend/synchronize on-premise to cloud – Active Directory synching to Azure
  • Monitor and protect access to enterprise apps
    • Passwords stored in encrypted hashes
    • Security reporting that tracks inconsistent access patterns – e.g. user accessing service from distant geo-locations
    • Step up to Multi-Factor Authentication – e.g. text message or e-mail with secret code
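
“Encrypted hashes” here presumably means salted, iterated password hashes. A generic sketch of that pattern in Python (PBKDF2 is one common choice; this illustrates the idea, not Microsoft's actual implementation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted PBKDF2-SHA256 digest; store (salt, digest, iterations)."""
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    return salt, digest

def verify_password(password, salt, expected_digest, iterations=100_000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(digest, expected_digest)
```

The per-user salt means identical passwords produce different stored digests, and the iteration count slows offline brute-force attacks.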

Data encryption

  • VMs – encrypted disk using BitLocker
  • Can encrypt data at rest
  • Applications – RMS SDK
  • Storage – .NET Crypto, BitLocker (import/export), StorSimple w/AES 256

Data protections in Office 365

  • Encrypt data in motion and also at rest

A.J. Schwab – Senior Privacy Architect, Office 365, Microsoft – Protect Data in Operations

Value proposition of running in cloud

  • Less work—patching, reacting to problems

Defense in depth strategy

  • Physical
    • Who comes into facility?
    • What media goes in/out?
    • If bad guy can stand in front of your computer, it’s not your computer anymore
  • Network
    • Looking for anomalous traffic
    • Packet penetration testing
    • Watching logs
  • Identity & Access Management
    • Internal Microsoft authentication policies for internal staff
    • Know who people are and who gets access from within Microsoft
    • Just-in-time access – when someone wants access to customer information, it’s an exception
  • Host Security
    • Patching, managing OS on host
  • Application
    • Make sure that application is running in secure configuration
  • Data
    • “Data is everything” – data is money
    • Big part of the focus is protecting the data
  • 24x7x365 incident response

Cloud security must be equal to or better than on-premise

Protect data in operations

  • Data isolation
    • Very important to customers
    • Only privileged user has access to data
  • Limited Access
    • MFA for service access
    • Auditing of operator access/actions
    • Zero standing permissions in the service
    • Automatic Microsoft staff account deletion
      • To make sure that things follow policies, everything is automated
    • Staff background checks, training
      • Can Microsoft trust the people that it hires?

Approach to Compliance

  • Industry standards and regulations
  • Controls Framework & Predictable audit schedule
  • Certification and Attestations

Customer Stories – Kindred Healthcare

  • Background
    • Big healthcare provider
    • Mobile service, ensure data privacy
  • Solution
    • Office 365 Exchange, SharePoint, Lync
    • Met security and privacy needs

Shared Protection Responsibility

  • IaaS – cloud customer has most of the responsibility
  • SaaS – cloud provider assumes many of the responsibilities

Provide transparency and choice

  • Trust Center web page – for Office 365, and for Azure
  • Lots of documentation online


  • 1 – Design for privacy
  • 2 – Built-in features
  • 3 – Protect data in operations
  • 4 – Provide transparency and choice

Questions and Answers

Q: SharePoint—is data encrypted while at rest? Is BitLocker available? Or third-party products?

  • Microsoft has committed to the goal of having all data in transit and all data at rest encrypted
  • By the end of 2014, SharePoint data at rest will be fully encrypted
  • But law enforcement has generally been satisfied with current security and privacy policies

Q: What tools do you have to assist attorneys?

  • See materials in the Trust Center
  • Microsoft constantly talking to lawyers, to stay on top of current regulations
  • So the collateral materials that are required are probably there
  • We do have Controls Framework that maps what Microsoft does and maps it to specific regulatory requirements
  • Thinking about how to package this up and present it for customers

Q: How to evaluate tools based on legal requirements?

  • We (Microsoft) can’t give you (customer) legal advice. But we can show you how tools map to particular requirements
  • Can do this in the context of certain verticals, e.g. Banking

If you have questions, stop by the Security & Compliance station in the Azure booth

TechEd NA 2014 – Microsoft Azure Security and Compliance Overview

TechEd North America 2014, Houston
Microsoft Azure Security and Compliance Overview – Lori Woehler

Day 2, 13 May 2014, 8:30AM-9:45AM (DCIM-B221)


Executive Summary—Sean’s takeaways

  • Microsoft has done a lot of work to support various security standards
    • In some cases, you can use their documents as part of your own demonstration of compliance
  • Data can be more secure in cloud, given the attention paid to security
  • Customer has greater responsibilities for demonstrating compliance when using IaaS (Infrastructure)
    • Fewer responsibilities when using PaaS (Platform)—just application and data
  • Potentially more compliance issues in EU and Asia, or in certain verticals (e.g. Healthcare)
  • Good compliance cheat sheet that lists typical steps to take

Full video

Lori Woehler – Principal Group Program Manager, Microsoft. CISSP, CISA
At Microsoft since 2002
On Azure team for 18 months


  • Understand how Azure security/compliance helps you to meet obligations
  • Define Azure S&C boundaries and responsibilities
  • Info on new resources and approaches

Other sessions

  • B214 Azure Architectural Patterns
  • B387 Data Protection in Microsoft Azure
  • B386 Mark Russinovich on Cloud Computing
  • B306 Public Cloud Security

Track resources

  • Security Best Practices for developing Azure Solutions
  • Windows Azure Security Technical Insights
  • Audit Reports, Certifications and Attestations
    • Includes all details related to audits
    • Can just hand off the stack of paper to outside auditors

Other resources

Technology trends: driving cloud adoption

  • 70% of CIOs will embrace cloud-first in 2016
  • Benefits of cloud-first
    • Much faster to deliver solution
    • Scale instantly
    • Cheaper, e.g. $25k in cloud would cost $100k on premises

Cloud innovation

  • Pre-adoption concerns (barriers to adoption)
    • 60% – security is concern
    • 45% – worried about losing control of data
  • Security, Privacy, Compliance

Cloud innovation

  • Benefits realized
    • 94% – new security benefits
    • 62% – privacy protection increased by moving to cloud

Trustworthy foundation timeline

  • 2003 – Trustworthy Computing Initiative
  • Digital Crimes Unit
  • ISO/IEC 27001:2005
  • SOC 1
  • UK G-Cloud Level 2
  • SOC 2
  • CSA Cloud Controls Matrix
  • PCI DSS Level 1

Azure stats

  • 20+ data centers
  • Security Centers of Excellence – combat evolving threats
  • Digital Crimes Unit – legal/technical expertise, disrupt the way cybercriminals operate
    • Info on botnets
    • Bing team publishes blacklist and API to access it
  • Compliance Standards – alphabet soup of standards, audits, certs

Microsoft Azure – Unified platform for modern business

  • Four pillars
    • Compute
    • Data Services
    • App Services
    • Network Services
  • Global Physical Infrastructure

Simplified compliance

  • Information security standards
    • Microsoft interprets, understands
  • Effective controls
    • Map to controls, e.g. SOC 1 type 2, SOC 2 Type 2
    • Evaluate both design and effectiveness of controls
  • Government & industry certifications
    • Ease of audit and oversight

Security compliance strategy

  • Security goals in context of industry requirements
  • Security analytics – detect threats and respond
  • Ensure compliance for high bar of certifications and accreditations
  • Continual monitoring, test, audit

Certifications and Programs

  • Slide shows summary of various certifications
  • ISO/IEC 27001 – broadly accepted outside U.S.
    • Now supporting “Revision 3” under 27001
  • SOC 1, SOC 2 – for customers who need financial reporting
    • Five different areas: Security, Privacy, Confidentiality, Integrity, Availability
    • SSAE 16 / ISAE 3402 – accounting standard
  • For IaaS, compliance information is more detailed
  • Increasing focus on government certification and attestation

Contractual commitments

  • EU Data Privacy approval
    • Microsoft is the only provider approved under EU Article 29
  • Broad contractual scope
    • Contractual commitments for HIPAA et al

Shared responsibility

  • Where is customer responsible, vs. Microsoft
  • Customer
    • Manages control of data in PaaS
    • Going with PaaS reduces customer responsibility to just Applications and Data
    • Under PaaS, no customer responsibility for Runtime, Middleware, O/S
  • SaaS – no customer responsibility

PaaS Customers – important things to know

PaaS Customer Responsibilities

  • Access Control – define security groups and security settings
    • Logs to demonstrate that access is due to customer granted permission
  • Data Protection
    • Geo-location – be careful about setting yourself up for potential non-compliance
      • There are obligations in Europe and Asia
      • You can check for access from outside your geo-location boundaries; then potentially restrict access
    • Data Classification and Handling
      • Deciding what data should go up to the cloud
      • Microsoft has published guides to classifying data (schemas)
      • Cloud Controls Matrix – show where you have programmatic obligation
    • Privacy and Data Regulatory Compliance
  • Logging & Monitoring Access and Data Protection
  • ISMS Programmatic Controls
  • Certifications, Accreditations and Audits
    • Can I just use Microsoft’s audit results as our own? No

IaaS Customer Responsibilities

  • Application Security & SDL (Security Development Lifecycle)
    • Can test outside of protection
    • Role segregation, between operations and development
    • E.g. rely on TFS to show process and evidence
  • Access Control – identity management
    • Start with access control to Azure environment itself
    • Then also access control to guest OSs (or SQL Server)
    • Auditors will focus on timing of provisioning/de-provisioning (e.g. remove user when they leave company)
  • Data Protection
    • Microsoft demonstrates that data in your environment is not exposed to other customers
    • Focuses on Hyper-V when testing
  • O/S Baselines, Patching, AV, Vulnerability Scanning
    • Standard build image in Azure, patched to most recent security update
    • Customers should adopt a standard patching cadence, matching their on-premises infrastructure
    • Configuration and management of SSL settings is responsibility of customer
  • Penetration Testing
  • Logging, Monitoring, Incident Response
    • Microsoft has limited ability to access your logs and VM images
  • ISMS Programmatic Controls
    • Impact of documentation of Standard Operating Procedures—quite cumbersome
    • Can start by taking a dependency on Azure, via documents that Microsoft has already generated
    • But this doesn’t go all of the way
  • Certifications, Accreditations & Audits
    • Auditors shouldn’t re-test customers in the areas that Azure already covers
    • Documentation that Azure provides should be enough
    • White papers in trust center describe how to leverage Microsoft stuff

Compliance cheat sheet

  • Identify your obligations/responsibilities
    • E.g. contractual
  • Adopt Standard Control Set
    • List of the rules, ties into policies
  • Establish policies and standards
    • “Your plan is your shield”
    • Criteria against which external auditors will evaluate your environment
    • Don’t try to be too broad, trying to cover every possible audit—auditor will apply their own judgment
    • Set level of detail listing deliverable and schedule for deliverable
    • Then you just demonstrate that you’ve met the policies that you’ve set
  • Document system(s) in scope
    • Challenging, if you haven’t implemented an asset inventory mechanism
    • Auditors will want to see all assets—physical and virtual (e.g. user accounts)
    • A significant amount of work
    • Log when systems come online and offline (into or out of production)
  • Develop narratives for each control
    • Written description of how a control executes
    • Ties back to specs for systems
    • Auditors will look at spec and then test plan
  • Test control design & execution
  • Identify exceptions and issues
    • No such thing as perfect
    • Document decisions made
    • “Qualified report” – auditor’s report that says that vendor is only partly compliant
  • Determine risk exposure
    • “Transferring risk” to third party—sometimes reduces your risks, sometimes increases your risks
    • Understand both costs and risks
    • Story of Singapore government, including keystroke loggers and video cameras, plus person observing live data feed (for traders using financial service)
  • Define remediation goals and plans
  • Monitor the system
    • And demonstrate to 3rd party that your controls are behaving as expected
  • Report on compliance status
    • Not just reporting for checklist

More detailed cheat sheet

Most Frequently Asked Questions

  • PCI Compliant? – no
  • Can xyz audit Azure? – no
  • Can we have your pen test reports? –
  • Will you fill out this 500 question survey? –
  • Kicked out of the room at this point

TechEd NA 2014 – The Future of Microsoft Azure DevOps: Building, Deploying, Managing, and Monitoring Your Cloud Applications in the New Azure Portal

TechEd North America 2014, Houston
The Future of Microsoft Azure DevOps: Building, Deploying, Managing, and Monitoring Your Cloud Applications in the New Azure Portal – Chandrika Shankarnarayan, Bradley Millington

Day 1, 12 May 2014, 4:45PM-6:00PM (DCIM-B223)


Executive Summary—Sean’s takeaways

  • Gradually doing migration of old portal features to new portal
  • New portal organization much better, information organized based on what you need to see
  • Tight integration with Visual Studio Online
    • VSO will eventually go away, as it’s subsumed into Azure portal
  • With VSO integration, Azure portal includes lots of ALM stuff
  • Diagnostics on web sites now a bit easier, with streaming logs, etc.

Full video

Chandrika Shankarnarayan – Principal Program Manager Lead, Microsoft


  • Icebreaker –
  • DevOps lifecycle
  • Resource Manager
  • Demos
    • New portal, new concepts
    • Using VSO to manage dev lifecycle
    • Operating a running app

Microsoft Azure

  • IaaS, PaaS
  • IaaS – VMs, Storage, Networking, CDN
  • PaaS – Web, Mobile, Identity, Data, Analytics, Integration, Media, Gaming

DevOps Basics

  • Developers
  • Code Repository
  • Build
  • Test
  • Deploy to Cloud
  • Monitor and Improve

Consistent Management Layer

  • Tools – Azure + Command Line + Visual Studio
  • Consistent Service Management API
  • Resource Manager – compose application of multiple resources, on-premise and in cloud
  • Resource provider contract

What does it all mean?

  • Application Lifecycle Container
    • Deploy/manage application as you see fit
  • Declarative solution for Deployment and Configuration
    • Single-click deployment of multiple instantiations of your application
  • Consistent Management Layer
    • Same experience whether from portal, command line, or tools


  • Gallery – one-stop for all Azure services, Microsoft or third-party
  • Website + SQL – composite
    • Resource Group – set of services
    • Website configuration
      • Very nice summary of web hosting plan features
      • Lots more tiers
      • E.g. Standard Small
    • Database configuration
      • Can use existing database
  • Create Team Project
    • Name
    • Git for source code control
    • Account – Visual Studio Online
  • Portal overview
    • Can customize the portal
    • Can change size of various things
    • Can unpin tiles from portal
  • Hubs concept
    • Notifications – all of your operations (including API stuff)
    • Billing – lists your accounts – single place to show pricing stuff
      • Burn rate for the month
      • Cost breakdown, based on resource
      • Manage Admins from Billing
  • Open Team Project
    • Add User for Team Project – autocomplete if user is in AAD tenant

Bradley Millington – Principal Program Manager, Microsoft

Browse Team projects

  • Can pin to dashboard
  • Can find web site and pin to start

Drilling into Team Project

  • Summary lens – overview of resource
  • Quick start – some quick docs on what you can do
  • Properties – basic props
  • Code lens
    • Source repository (TFS or Git)
  • Build lens
    • Build definitions that trigger builds
  • Work lens
    • Backlog and work items
  • Users lens – manage users

Adding code to repository

  • Get clone URL
  • Command line, “git remote add ..” and then “git push ..”
  • Back on portal, code shows up – all the git commit history is still there
    • Commits – properties, diffs, etc.
  • Can browse source code
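A minimal sketch of the clone-URL workflow above. A local bare repository stands in for the VSO clone URL here, since the real URL comes from the portal’s Code lens:

```shell
# Stand-in for the remote repository behind the portal's clone URL
rm -rf /tmp/vso-standin.git /tmp/demo
git init --bare /tmp/vso-standin.git

# Local repository with one commit to publish
git init /tmp/demo
cd /tmp/demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# The two commands from the notes; normally the origin URL is the
# https://<account>.visualstudio.com/... clone URL from the portal
git remote add origin /tmp/vso-standin.git
git push origin HEAD
```

After the push, the commit history, properties, and diffs show up in the portal exactly as noted above.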

Continuous deployment between project and the web site

  • Can browse from project and look for web site
  • Deployment slots
    • Add Slot
    • Name
  • Slot is just another web site
  • In web site, “set up continuous deployment”
  • Choose source – e.g. Visual Studio Online
  • Choose project
  • Choose branch

This created a Build definition

  • Every time you check-in code, it builds (and deploys?)
  • Resource map shows master web site and staging deployment
  • Drill into build – can look at log files, build results
  • Same build info that you’d see in Visual Studio

Create work items

  • Can just type items and press RETURN
  • Can set properties on work items


  • Projects backed by Azure Active Directory – so you can really control who gets access
  • When you add a user to a project, they are added to a VSO account
  • VSO – unlimited number of private projects
  • But 5 free basic users

Scaling of Visual Studio Online account

  • Users, paid services, etc. – through VSO portal

Build definition

  • Can see deployment

Can look at staging site

  • Click Swap to swap it to production
  • “Swapping website slots”

Look at Azure portal from perspective of developer that’s been using it for a while

  • Can click on Build Definition and look at various builds
  • Drill into build that failed – look at details
  • Can see compilation problem, get specific error in solution
  • Can even edit code (errors not yet highlighted)
  • Can then make code changes directly in repository

Or you might see that compilation worked, but tests failed

Visual Studio integration

  • Can open up solution, look at build results
  • Can fix, then queue new build

Chandrika back again

Day 100 in life of Ops person

  • Look at Resource Group
    • Can contain web sites
    • Show costs for this resource group
  • Pick website
    • Website, SQL database, and associated project


  • Graph of HTTP requests
  • Edit query (for graph)
    • Add other metrics, change to “Past Hour”
  • No 404 errors
  • Can set up Alert, track errors, e.g. 404

Client-side analytics

  • Mobile device type
  • Browser types
  • Can look at page load time, by page

Configuration – code snippet

  • Just Javascript snippet that you inject into application
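Generically, that snippet is just a small script block pasted into the page; the portal generates the real one, tied to your instrumentation key, so the URL and variable names below are made up:

```html
<!-- Hypothetical sketch of a client-analytics snippet; the portal
     generates the actual code, parameterized by your app's key -->
<script type="text/javascript">
  var analyticsConfig = { instrumentationKey: "YOUR-KEY-HERE" }; /* placeholder */
</script>
<script src="https://example.com/analytics.js" async></script>
```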

Webtest – can ping application from various geographic locations

  • For a test, look at average response time, failures, etc.

Can also look at server that you’re hosting on

  • CPU percentage for a given day

Can set up Autoscale

  • By default, you have a single metric – e.g. CPU percentage
  • One key metric to look at, to see if you need to scale, is HTTP Queue Length


  • Can customize metrics and graphs
  • E.g. CPU usage, Memory, and HTTP queue length
  • Easy to customize

Troubleshooting – Events

  • Azure records every operation that you make in portal
  • E.g. Scale Up / Down (e.g. from Autoscale)
  • Can drill into these events and look at specific properties


  • Can set up alert rules for any metrics
  • E.g. CPU Percentage
  • Specify threshold, condition, period
  • Can do E-mail, SMS

Streaming Logs – real-time for troubleshooting

  • Diagnostics logs – can configure
  • Can see stuff hitting site in real-time

Console connection to web site


  • Can’t define in new portal yet – use old portal
  • Sometimes gaps between current portal and new portal

Traffic Manager

  • Support for hybrid connections
  • Can set up in portal
  • Create or use existing
  • Also available for Mobile Services


  • Azure Portal Preview + Resource Manager
    • Enterprise grade
    • Bridges islands of experiences
    • Simple, best in class
    • Consistent
    • Customizable
    • Enable discovery, cross-sell and up sell of services
    • Provide ecosystem for internal and external partners



Q: Merging Visual Studio Online and Azure Portal?

  • Yes gradually moving stuff to Azure Portal. But no real roadmap for how this will work

Q: On current portal, timeline of preview portal?

  • New portal available in “preview”, not full functionality yet. Will continue to bring more and more services into the Preview portal, “over a period of time”, and maintain data consistency between the two portals

Q: Features available in on-premise platform? (Windows Azure Pack)

  • All this work will transition to Azure Pack. But timeframe is TBD

Q: Application insights – can you bring info from on-premises instances?

  • Yes, goal to be agnostic w/respect to cloud/on-prem. So eventually, yes.

Q: Plan for role-based stuff?

  • Resource Manager is integrated with AD, so we’re set up for it. So a possible future ability – define Users/Roles

Q: Analytics – what about perspective for business/marketing people?

  • We have some of those scenarios, e.g. in Office 365. Will definitely bring some of those scenarios into this portal. So yes, the experience would be in this same portal. Also looking at dashboards that you customize for certain roles, “custom dashboards” targeted at a specific role, e.g. Marketing.

Q: Where are the metrics coming from (e.g. 404)

  • Yes, coming directly from web site back-end (Performance Counter)

Q: Continuous deployment options?

  • A function of team management features and dashboards targeted at specific groups in team

Q: Additional step to continuously deploy into production?

  • Swap to move to production. More likely, you’d have bigger resource group and you want to move a number of items from staging to production. Maybe pull some features from Visual Studio. Also, you can use API layer in Resource Manager to do something like run scripts that do cloning and deployment. One goal of Resource Management is to reliably replicate some of the expected scenarios.

Q: Integration of package management system, e.g. NuGet?

  • Great idea, but can’t speak to specific plans.

TechEd NA 2014 – Building Highly Available and Scalable Applications in Microsoft Azure

TechEd North America 2014, Houston
Building Highly Available and Scalable Applications in Microsoft Azure – Stephen Malone, Narayan Annamalai

Day 1, 12 May 2014, 1:15PM-2:30PM (DEV-B311)


Executive Summary—Sean’s takeaways

  • Azure Traffic Manager
    • Routes to appropriate region, depending on where user is located
    • Automatic failover, if region goes down
    • Configure things using PowerShell
    • Can now include non-Azure endpoints in policy
  • Use PaaS when you can, IaaS when you have to
  • Lots of flexibility when creating/configuring virtual networks

Full video

Stephen Malone – Program Manager, Microsoft

Why Microsoft / Azure

  • Global footprint

Azure Network Stack

  • Network is glue that binds everything together for scale
  • Main building blocks that allow you to build scalable, secure services
  • Layers: Network services, Core SDN etc.


Azure Traffic Manager

  • Intelligent customer routing
  • Load balancing policies (profile types)
    • Performance – Direct user to closest service (based on latency)
    • Round-robin – Distribute equally
    • Failover – Direct to backup service if primary fails (also happens with other policies)

Automated failure detection and re-direction

  • Users hit servers in their own region
  • Service health monitoring
    • If something stops responding
    • Azure Traffic Manager automatically detects and re-routes user to “next best” service


How does Azure Traffic Manager work?

  • DNS based global traffic management
  • Traffic Manager profile created with name, routing policy, and health monitoring configuration
  • Your domain name is set up as a CNAME to the profile’s trafficmanager.net name
  • Load-balancing, endpoint monitoring
  • Service instances (endpoints) added to Traffic Manager …
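Concretely, the CNAME mapping above is just a DNS record pointing your domain at the Traffic Manager profile’s domain name; in zone-file form (names hypothetical):

```
; Hypothetical zone entry: the customer domain is a CNAME
; pointing at the Traffic Manager profile's domain name
www.contoso.com.   IN   CNAME   myapp.trafficmanager.net.
```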


How it works

  • DNS lookup for the web site you need
  • Name server for your domain indicates that it’s a CNAME
  • Hits DNS Server for traffic manager, with Policy Engine
  • Then to Traffic Manager

E.g. If there are three sites

  • Traffic Manager makes use of your particular policy (e.g. pick nearest service)
  • Then site picked and IP returned to client


New support for External Endpoints

  • Now support for non-Azure endpoints for all traffic manager policies
  • Full support for
    • Automated monitoring
    • Failure detection
    • End-user re-direction
  • Include endpoints from different Azure subscriptions in the same policy
  • Add redundancy for your on-premises service using Azure Traffic Manager
    • Great way to try out Azure – as backup
  • Include on-premises endpoints as scale units to achieve greater scale
    • Or as additional geo locations to improve performance for your end users
  • Enables burst to cloud scenarios transparently to end-user
    • Also can auto scale up within a region

Demo – External Endpoints

  • Create a new profile in PowerShell
  • Then add endpoint to profile (e.g. U.S.)
  • Different domain name (e.g. or
  • Then show adding an external endpoint
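As a sketch, the classic Azure PowerShell cmdlets for this demo looked roughly like the following; the profile and domain names are made up, and parameters should be verified against the module’s help:

```powershell
# Create a Traffic Manager profile with a failover policy (names hypothetical)
$profile = New-AzureTrafficManagerProfile -Name "myprofile" `
    -DomainName "myapp.trafficmanager.net" -LoadBalancingMethod "Failover" `
    -Ttl 30 -MonitorProtocol "Http" -MonitorPort 80 -MonitorRelativePath "/"

# Add an Azure endpoint, then an external (non-Azure) endpoint
$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile `
    -DomainName "myapp-us.cloudapp.net" -Type "CloudService" -Status "Enabled"
$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile `
    -DomainName "onprem.example.com" -Type "Any" -Status "Enabled"

# Commit the changes to the profile
Set-AzureTrafficManagerProfile -TrafficManagerProfile $profile
```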

Narayan Annamalai – Senior Program Manager, Azure

  • Will talk about how we can help you to scale

Build to scale

  • Regional Virtual Networks
    • Really picking up now

Virtual Network

  • Logical isolation with control over network
  • Create subnets with your private IP addresses
  • Stable and persistent private IP addresses
  • Bring your own DNS
  • Use Azure-provided DNS
    • VMs can register themselves with this DNS
  • Secure VMs with input endpoint ACLs

Typical multi-tier services

  • Composed of various services
  • They are interconnected, certain services having to talk to certain others


How you use

  • You can pick IP addresses for your VMs, within a virtual subnet that you create

Isolated and connected

  • Internet portal into one public IP
  • Then customer virtual network that acts as a private network
  • “Isolated private channel”
  • Use PaaS when you can; use IaaS when you have to
  • PaaS gives you some additional services (like auto-scaling)
  • All of these services can be part of the same virtual network
    • Brings IaaS and PaaS together

Regional scope

  • VNET spans to an entire region
  • Fully connected private and isolated network across datacenters
  • New services requiring specific SKUs A8, A9 can be added to same VNet –
    • Seamless expansion
  • Previously, VNet had to be within a single scale unit
  • Now, VNet can include multiple SKUs (in terms of scale)

Inter connected VNets

  • VNets can be connected thru secure Azure gateways
  • VNets can be in different subscriptions
  • VNets in same or across regions can be connected

Connecting to Multiple Sites

  • Multiple site-to-site connections
  • Multiple on-premises sites connect to same VNet
  • Sites may be geographically dispersed

Public facing Services

  • Every cloud service is given a public IP address (VIP) from Azure’s pool of addresses
  • Virtual machines, Web/Worker roles in cloud service can be accessed thru VIP using endpoints
  • Azure provides load balancing at no charge

Create endpoints

  • E.g. open port 80, run two instances behind it

Public Endpoint Access Control Lists

  • Can whitelist IPs, subnets, etc.
  • Can allow or deny various IPs
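From memory, the classic service-management cmdlets for endpoint ACLs looked roughly like this; the service, VM, and subnet names are made up, and parameter names should be checked against the module’s help:

```powershell
# Build an ACL that permits a corporate subnet; anything not permitted is denied
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Order 100 -Action permit `
    -RemoteSubnet "10.0.0.0/8" -Description "CorpNet only"

# Apply the ACL to an existing endpoint on a VM
Get-AzureVM -ServiceName "mysvc" -Name "web1" |
    Set-AzureEndpoint -Name "web" -Protocol tcp -LocalPort 80 -PublicPort 80 -ACL $acl |
    Update-AzureVM
```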

Internal load balancing (preview)

  • Load balancing between VMs w/o public facing endpoints
  • Two flavors
    • Private load balancing within cloud service
    • Or within VNet
  • Multi-tier applications with internal facing tiers require load balancing
    • Middle tier, DB backend not exposed to Internet
    • Load-balanced endpoints exposed only to CorpNet
    • SharePoint, LOB Apps

Scenario – LOB Apps

  • Private, SharePoint accessible from other VNets

IP reservation

  • Today, you get VIP assigned by Azure
  • When you re-deploy, you get different IP
  • Now, IP reservation
    • Reserve public IP addresses
    • Customers can own IP addresses and assign them to cloud services
    • Reserved IP can be used on any cloud service in the region
    • Current IP address on existing service can be reserved as well
    • Reserved IPs are the customer’s to keep
  • Why do you need it?
    • Your service might talk to an external service that needs to be configured to whitelist your service
    • So your IP needs to remain static so that whitelist still works

Instance level public IPs (Preview)

  • Today, every cloud service gets VIP assigned by Azure
  • You must map from public VIP to port on VM for internal server
  • Now – Instance-level Public IPs
    • Assign public IPs to VMs
    • Direct reachability to VM, no endpoint required
    • Public IP used as the outgoing IP address
    • Enables scenarios like FTP services, external monitoring, etc.

Demo – Create VNet, etc.

  • PowerShell is very powerful when doing things on Azure
  • Some functions not available yet on portal, must use PowerShell
  • Start with: Get-AzurePublishSettingsFile
  • Set-AzureVNetConfig
  • Reserve IP
    • New-AzureReservedIP, define by name
    • Then Get-AzureReservedIP – by name
  • Creating specific web sites (VMs)
  • Create VM on a specific VNet
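A rough sketch of the cmdlets named in the demo, from the classic (pre-Resource Manager) Azure PowerShell module; the file paths and names here are placeholders, and parameters should be checked against the module’s help:

```powershell
# Authenticate by downloading and importing a publish-settings file
Get-AzurePublishSettingsFile
Import-AzurePublishSettingsFile ".\my-subscription.publishsettings"  # hypothetical path

# Apply a virtual-network configuration from an XML file
Set-AzureVNetConfig -ConfigurationPath ".\vnet-config.xml"           # hypothetical file

# Reserve a public IP by name, then retrieve it by name
New-AzureReservedIP -ReservedIPName "MyReservedIP" -Location "West US"
Get-AzureReservedIP -ReservedIPName "MyReservedIP"
```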

TechEd NA 2014 – Keynote

TechEd North America 2014, Houston
Keynote – Brad Anderson, Josh Twist, Matt McSpirit, Adam Hall, Brian Harry, Richard White, Eron Kelly, Julia White

Day 1, 12 May 2014, 9:00AM-10:30AM


Executive Summary—Sean’s takeaways

  • Mobile-First, Cloud-First
  • Focusing on Mobile Apps + Cloud Services is critical
  • For Cloud, Azure is a great solution
    • Highly scalable
    • Supports lots of hybrid (on premise + cloud) scenarios
    • Good support for analytics
    • Resilient—e.g. automatic failover
  • Mobile Apps
    • Must support iOS and Android
    • Microsoft supports identity and app management on these devices
    • IT control of apps and data
  • Story for Developers
    • Easier and easier to create cross-platform solutions (Visual Studio + Xamarin)
    • Native apps for best experience and for leveraging existing .NET skills
    • Hybrid apps, HTML5 + Cordova, if your devs have HTML5 experience
  • Story for Users
    • Touchy-feely video—things basically getting cooler for users all of the time
  • Office 365
    • Very cool, running on a variety of platforms
    • Multiple users
    • IT management

Full video

Brad Anderson – Corporate Vice President, Windows Server and System Center Program Management

On Satya—no better leader, no better human

Reflecting on past 10 years

Connected devices

  • 2008 – more connected devices than humans
  • Now hyperscale

Growing number of users, growing # of connected devices

  • Lots of data
  • Highly dependent on cloud


Video – mobile cloud – various speakers

  • Costs very little to store all this data
  • Devices can access this data
  • Lots of compute power + lots of connectivity


  • Two loops
    • Device – data
    • Device – human
  • Detecting Parkinson’s by detecting micro tremors
  • Guthrie – important to engage customers
  • Data lets you engage and collaborate
  • “Start dreaming” – If you don’t dream, you won’t go anywhere

Mobile-first and cloud-first

  • Must have both
  • Cloud with devices is untapped potential
  • Devices without cloud is the same—untapped potential

Bring together – IT Pros, Devs, End Users

  • Want all these groups to get value from the cloud
  • Public cloud can bring value to everything you do

IT Pros – Cloud

  • Stop thinking as private/public clouds as separate
  • The cloud is integral to your datacenter
    • Will let you deliver new things to your customers
  • Think about the organization you choose for public cloud

Three attributes for public cloud provider

  • Scale – hyperscale – only three organizations doing this right now
    • Willing to back SLA financially
  • Enterprise proven
  • Hybrid
    • Build application on any cloud, deploy on any cloud

I submit..

  • If you use these attributes to pick provider, Microsoft stands alone

Example of organizations using Azure

  • Walsh, Paul Smith, NBC Olympics, Blink Box, TitanFall

What could you do with unlimited computing power?

  • TitanFall – Respawn

TitanFall video

  • Figures out nearest data center
  • Spins up dedicated server
  • More consistent experience – no host advantage
  • “throw ’em a server”
  • Constantly available set of servers
  • AI powered by server – wouldn’t be feasible if it was client-hosted

Day one – 100,000 VMs running around the world

  • Controlled by <150 employees

Infrastructure of Azure

  • 16 different regions
  • Azure Active Directory – 2B authentications per day
  • Cloud is your design point
  • Foundation of data centers is Windows Server
  • Network of compatible clouds

Embrace cloud culture thru hybrid


  • Identity – AAD, rights management, Cloud App Discovery
  • Compute – IaaS, compute intensive VMs
  • Networking – dedicated high-speed link, ExpressRoute, redundancy
  • Storage – Analytics Platform System, Azure Files preview


  • Applications always require database
  • SQL Server 2014 – 30x increase in performance, w/o rewriting application (3000%)
  • Azure Redis Cache Preview
  • API Management Preview
    • Build API for your customers, share

Josh Twist – Demo on API Management

Story about Wellmark (health care provider), using Azure API Management

  • Partner-facing dev portal
  • Document that tells devs how to use API, dynamically generated
  • Can generate code samples, platform-specific

Publisher experience

  • Dashboard for API
  • Enable caching
  • Policy configuration –
    • Rate limit on API
    • Quotas

Back to developer portal

  • Developer console, generated by API platform
  • Can exercise API in real-time
  • Demo rate limit – hit several times, then get 429 Too Many Requests response

Insight into trends, usage, health of API – Analytics

  • Summary of API
  • Geographical distribution, can drill in, see state by state


  • Protect API, get insight
  • Available today in the Azure portal

Brad Anderson back

Business continuity

  • SQL Server 2014 AlwaysOn
  • Anti-malware added to Azure
  • w/TrendMicro, additional security options
  • Encrypted Storage for Office 365 – every file encrypted with its own key
  • Site Recovery Preview – (HyperV Replication Manager) – disaster recovery
    • Azure can be location where you store copy for disaster recovery
    • Seamlessly failover to Azure
    • Simple, can apply to all services in data center
    • Azure can monitor health of your VMs, auto launch failover stuff

Matt McSpirit – Senior Product Marketing Manager, Microsoft – Failover Demo

Site Recovery – secondary location can be Azure

  • Replication and recovery of your on-premises data centers
  • Number of protected clouds, all on-premises, managed by Systems Center
  • Configured for Replication and Recovery

Configuring cloud

  • Replication interval down to 30 secs
  • # recovery points
  • Encryption at rest

On premises networks mapped to Azure networks

How VMs failover in controlled way

  • Should be automatic
  • Recovery plans – series of steps that executed in event of a failover
  • Done in groups of services
  • Can be manual action – asks someone to do something, waits for OK

Initiating failover

  • Can test failover, prior to initiation
  • “Planned failover” – primary site still online
  • Just click

Brad Anderson back

Telemetry data

  • 100% success with failovers

Mobile side (mobile-first)

  • The cloud is integral to your enterprise mobility
  • Devices are lit up with various apps, become extension of who we are

Enterprise mobile needs

  • How do you enable users to be productive on their devices, while keeping company secure
  • Three big categories for things that you need
    • Identity management
    • Device/application management
    • Information protection

Identity management

  • Leverage your investment in Active Directory
    • Common identity across all SaaS apps
  • First app to bring under management – e-mail
    • E.g. Outlook on iPad/Android
  • Will be bringing set of app wrappers on iOS/Android that will let you bring your apps to those platforms
  • Layered approach for protection
    • Protect device
    • Protect application
  • Protection that travels with file – Azure Rights Management service
    • Access rights travel with file

Enterprise Mobility Suite

  • $4/user/month – independent of # devices

Azure Remote App

  • Remote desktop services that run in Azure
  • Remote app down to devices
  • 4,000,000 downloads of RDS apps on devices

Adam Hall – Senior Product Marketing Manager, Microsoft – Azure Remote App Demo

Hey, this is the third British guy that we’ve seen. Rule Britannia!

User logs onto SaaS app, e.g. Office 365


  • User goes to sign-in page, enters AAD creds, connects to Outlook
  • AAD administration portal
  • Security reports – 2B authentications / day
  • Sign-in from multiple geographical regions – users who have logged in from 2 geographies that you can’t travel between
    • Likely compromised account
    • Administrator can reset password or enforce multi-factor authentication
  • User list
  • Can look at which applications each user is accessing
    • 80% of users admit using non-approved applications
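The "sign-in from 2 geographies that you can't travel between" check above amounts to a speed test between consecutive sign-ins. A toy sketch of that heuristic (the speed threshold and field names are assumptions, not the actual AAD implementation):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(signin_a, signin_b, max_kmh=1000):
    """Flag two sign-ins if covering the distance between them would
    require travelling faster than max_kmh (threshold assumed)."""
    hours = abs(signin_b["time"] - signin_a["time"]) / 3600
    km = haversine_km(signin_a["lat"], signin_a["lon"],
                      signin_b["lat"], signin_b["lon"])
    return hours == 0 or km / hours > max_kmh

# Seattle, then London one hour later: ~7,700 km in 1 h, so flagged
a = {"time": 0,    "lat": 47.6, "lon": -122.3}
b = {"time": 3600, "lat": 51.5, "lon": -0.1}
print(impossible_travel(a, b))
```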

Cloud App Discovery tool – Azure RemoteApp

  • See what apps users are accessing
  • Can control which applications a user can use

Azure Remote App (new)

  • Can publish access to applications that users can access

Brad Anderson back again


  • What can cloud bring to developers
  • E.g. company that has 10,000 apps that users use
  • End user experience – based on apps that users use
  • Cloud is integral to accelerating application innovation


  • Multi device
  • Iterate quickly
  • Rapid development


  • Universal Windows Applications
    • Single project in Visual Studio, 90% of code applies across all Windows platforms

Cortana demo

  • First and only personal digital assistant in cloud
  • Powered by cloud – watches what I do
  • Surfaces up the things that are relevant to me
  • Search example
    • Shows things that are important
  • “What’s on my calendar for Thursday”
  • “Go see Godzilla Friday the 17th at 7PM”
    • “Just so you know…” – you have a calendar conflict
  • “Find me the best barbecue in Houston”
    • Good reviews – interact with Yelp
  • It knows who I am
  • “Next time I speak with Chris, remind me to congratulate him on his daughter’s graduation”
  • Don’t have to speak, can interact with keyboard
  • “email from Chris”
    • Reminders show up
  • Speak name of app and then request
    • “Contoso find jobs near me”
    • Can enable your apps with Cortana

Mobile device management also works on phones

  • Historically – Rich experiences vs. Breadth of devices
  • .NET for Android and iOS
    • Through partnership with Xamarin
    • 60-70% of code in common

Hybrid Apps

  • Multi-device hybrid apps
  • HTML, but with much richer experience
  • Cordova partnership


  • Agile, Fast, Open

Brian Harry – Corporate Vice President, Microsoft – Application Lifecycle Management

Application Lifecycle Management

  • Key ingredient is integration

Visual Studio Online

  • Bring ALM value to cloud
  • >1 million users
  • SLA of 99.9%

Need to integrate tools with Visual Studio

  • New open APIs for Visual Studio online – allow integrating apps with VS Online

Richard White – CEO, Founder, UserVoice

Make user feedback actionable

Have done integration with Visual Studio

  • Prod Mgr & Eng Mgr
  • Looking at public portal
  • Product Manager – admin console
  • Can send item to Engineering team
  • Create Work Item – synchs to Visual Studio
  • On VS side, shows up on Backlog items

Kanban board

  • New => Approved
  • Add comment to Bug
  • Updated item in User Voice – see history of item
  • Prod Mgr can then mark item in User Voice as “started”
    • Will send message to users who said they wanted feature

Brad Anderson back again


  • How can users get value in Mobile-First, Cloud-First world
  • The cloud enables your users to do the things they’ve only dreamed of

Users video – What users can do

  • World has become giant network
  • Hyper connected world
  • What if you were connected to everything important
  • Plugged into everything that matters
  • Collaborating
  • Exchanging ideas
  • Listening to employees, customers, partners
  • Adapt quickly
  • Share information in real-time
  • One connected network
  • You understand customers
  • Power of the network
  • “Work like a network”


  • From information to action
    • Must focus
    • Technology must help us
  • Work like a network
    • Don’t lock up communications within organization
    • Share across organizations
    • Much more open environment
  • Responsible empowerment
    • Strike balance between corporate/personal

Information – Action

  • Need comprehensive data platform
  • Analytics engine
    • HDInsight, based on Hadoop
  • Get data to people
    • Using tools/capabilities that customers want to use
    • Surface everything through Excel
    • BI available in Office

Eron Kelly – General Manager, Microsoft

City of Barcelona – hosting festival

  • Bicing – Rent bike
  • Where are the bikes?
  • Measure social sentiment – what do users think?


  • Search from Excel
  • Look at “social sentiment data” up in cloud
  • Merge data set with existing analytics
  • Now, with map, can view various data

Q&A, asking question in natural language

  • “Total bikes available”
  • “Total bikes by street” – bar chart
  • “vs. xxx” – scatter chart
  • “tweet messages for streetname” – lists tweets

Brad Anderson again

Office Graph

  • Runs in cloud, understand individual’s intent
  • Surface data up to Office tools

Open Communications

  • What if all communications were “open” by default?

Mobile Productivity

  • Natively instrument Office 365 across all platforms
  • Intune
  • E.g. control where users save documents

Julia White – General Manager, Microsoft

Office 365 & Intune

OneDrive for Business

  • Struggling with “rogue” applications
  • 1 TB storage per user
  • Drag/drop content into web portal
  • Sync – syncs to local content in Windows Explorer
  • Share content – sharing icon – can enter people, from Active Directory, or from people outside company
    • Admin can control sharing
  • Collaboration
    • By default, edit in browser, but can edit in desktop app
    • Can see other people currently editing
    • Can see them editing in real-time
  • Go to ribbon, Tell Me, type “insert table”
    • Takes you right to command
  • Share with company – using Yammer – Post


  • Social aspect – how we work with groups of people
  • Future view of Groups, open communication
    • Group that people can find and join
    • Sure looks like Facebook

Outlook – now has Yammer Groups

  • Can attach to same conversation from Outlook
  • Can look at what groups some other guy is part of
  • Can view conversation in a particular group
  • Public vs. Private groups

Oslo app, built on Office Graph

  • Surfaces view of content that’s relevant to this user
  • Real-time stuff based on your actions, personalized
  • Tells you, for each item, what the source is—why it’s showing it to you
  • Some default searches – things “presented to me”
    • Based on meetings you’ve been in
  • Two views of Bill – org chart
    • Also how he’s spending time, who he’s working with
  • Can see what content is trending that you also have permission to access

Working on all devices, iPad demo

  • Open Word, can open document that you were just working on
  • “One source of the truth”
  • Unmistakably Word, but also really natural
  • Balance – familiar app experience, but fits into iOS experience
  • Easy to interact with touch on these devices
  • Context specific Ribbon command
  • Making changes live, will appear on other platforms
  • Always keep file fidelity, across all platforms
  • Co-editing also on iPad

New stuff coming in future

  • Office 365 & Intune
  • Mobile device management on Office 365 on iPad
  • IT can apply policies and controls
  • Outlook demo, open Excel
  • Only shows apps that IT has approved for business purposes
  • Better format, opening in Excel
  • Try sharing from native e-mail app on iPad => can’t paste document into e-mail
  • But can send document from Outlook
  • Business content stays only in managed/approved apps

Brad Anderson back again

Story about three customers using Office 365

  • Mitchells and Butlers – managing 20,000 mobile devices with 2 people
  • British Airways – 11,000 people on Yammer
  • Xx Golf – using Intune to manage laptops and iPads

Enterprise Mobility platform

  • Office 365
  • Mobility Suite
  • “Most comprehensive mobile solution”

The mobile-first cloud-first enterprise

  • We believe deeply in this

BUILD 2014 – Day 2 Keynote

BUILD 2014, San Francisco
Keynote 2 – Scott Guthrie, Rick Cordella (NBC), Mark Russinovich, Luke Kanies (Puppet), Daniel Spurling (Getty), Mads Kristensen, Yavor Georgiev, Grant Peterson (DocuSign), Anders Hejlsberg, Miguel de Icaza (Xamarin), Bill Staples, Steve Guggenheimer, John Shewchuk

Day 2, 3 Apr 2014, 8:30AM-11:30AM

Disclaimer: This post contains my own thoughts and notes based on watching BUILD 2014 keynotes and presentations. Some content maps directly to what was originally presented. Other content is paraphrased or represents my own thoughts and opinions and should not be construed as reflecting the opinion of Microsoft or of the presenters or speakers.

Scott Guthrie – EVP, Cloud + Enterprise


  • IaaS and PaaS
  • Windows & Linux
  • Developer productivity
  • Tons of new features in 2013
  • Lots more new features for 2014

Expanding Azure around world (green circles are Azure regions):

Run apps closer to your customers

Some stats:

Did he just say 1,000,000 SQL Server databases? Wow

Great experiences that use Azure

  • Titanfall
    • Powered by Azure

Video of Titanfall / Azure

  • Data centers all over
  • Spins up dedicated server for you when you play
  • “Throw ’em a server” – constantly available set of servers
  • AI & NPCs powered by server


  • Titanfall had >100,000 VMs deployed/running on launch day

Olympics NBC Sports (Sochi)

  • NBC used Azure to stream games
  • 100 million viewers
  • Streaming/encoding done w/Azure
  • Live-encode across multiple Azure regions
  • >2.1 million concurrent viewers (online HD streaming)

Olympics video:

Generic happy Olympics video here

Rick Cordella – NBC Sports (comes out to chat with Scott)


  • All the way from lowest demand event—curling (poor Curling)
  • Like that Scott has to prompt this guy—”how important is this to NBC”?
  • Scott—”I’m glad it went well”

Just Scott again

Virtual Machines

  • Can run both Windows and Linux machines
  • Visual Studio integration
    • Create, manage, destroy VMs from VS (smattering of applause)
  • Capture VM images with multiple storage drives
    • Then create VM instances from that capture
  • VM configuration
    • Use frameworks like Puppet, Chef, Powershell
    • Use modules to set various settings
    • Deploy to Puppet Master or Chef Server
    • Spin up server farm and deploy/manage using this master server
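The Puppet/Chef/PowerShell DSC model described above is declarative: you state the desired settings, and an agent converges each machine toward them. A minimal sketch of the convergence idea (toy model, not a real Puppet or DSC API):

```python
def converge(current, desired):
    """Return the actions needed to move a machine's current state
    to the desired state (toy model of Puppet/DSC convergence)."""
    actions = []
    for key, want in desired.items():
        have = current.get(key)
        if have != want:
            actions.append(f"set {key}: {have!r} -> {want!r}")
            current[key] = want  # apply the change
    return actions

server = {"iis": "stopped", "port": 80}
desired = {"iis": "running", "port": 8080, "firewall": "on"}
for action in converge(server, desired):
    print(action)

# A second run produces no actions: the state has converged
assert converge(server, desired) == []
```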

Mark Russinovich – Technical Fellow

Demo of creating VM from Visual Studio

  • Create VM
    • Deploy into existing cloud service
    • Pick storage account
    • Configure network ports
  • Debug VMs from Visual Studio on desktop
    • E.g. Client & web service
    • Switch to VM running web service
    • Set breakpoint in web service
    • Connect VS to machine in cloud
    • Enable debugging on VM
    • Rt-click on VM, Attach Debugger, pick process
    • Hit breakpoint, on running (live) service
  • This is great stuff..
  • Create copy of VM with multiple data disks
    • Save-AzureVMImage cmdlet => capture to VM image
    • Then provision new instance from previous VM image
    • Very fast to provision new VM—based on simple Powershell cmd
  • Integration with CM, e.g. Puppet
    • Create Puppet Masters from VM
    • Puppet Labs | Puppet Enterprise server template when creating VM
    • Create client and install Puppet Enterprise Agent into new client VM; point it to puppet master
  • Deploying code into VMs from Puppet Master – Luke Kanies (Puppet Labs)

Luke Kanies – CEO, Puppet Labs

Puppet works on virtually any type of device

  • Tens of millions of machines managed by Puppet
  • Clients: NASA, GitHub, Intel, MBNA, et al

Example of how Puppet works

Puppet demo

  • Puppet module, attach to machines from enterprise console
  • Deploy using this module
  • Goal is to get speed of configuration as fast as creation of VMs

Daniel Spurling – Getty Images

How is this (Azure) being used at Getty?

  • New way for consumer market to use images for non-commercial use
  • The technology has to scale, to support “massive content flow”
  • They use Azure & Puppet
  • Puppet – automation & configuration management
  • Burst from their data center to external cloud (Azure only for extra traffic)?

Back to Scott Guthrie

Summary of IaaS stuff:

Also provide pre-built services and runtime environments (PaaS)

  • Focus on application and not infrastructure
  • Azure handles patching, load balancing, autoscale

Web functionality

  • Azure Web Sites

  • Push any type of application into web site
  • AutoScale—as load increases, Azure automatically scales
    • Handle large spikes
    • When traffic drops, it automatically scales back down
    • You save money
  • Staging support
    • Don’t want site in an intermediate state, i.e. it should always be available
    • Create Staging version of web app
    • Used for testing
    • Once tested, you push single command (Swap), rotate Production/Staging
    • Old Production still there, in case something went wrong
  • WebJobs
    • Run background tasks that aren’t HTTP response threads
    • Common thing—queue processing
    • So user response is better: you just submit the task to a queue, then process it later
    • WebJobs—in same VM as web site
  • Traffic Manager
    • Intelligent customer routing
    • Spin up multiple instances of site across multiple regions
    • Single DNS entry
    • Automatically route to appropriate geographic location
    • If there’s a problem with one region, it automatically fails over to other regions
    • For VMs, Cloud Services, and Web Sites
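The failover behavior described above (one DNS entry, automatic rerouting when a region is unhealthy) can be sketched as a priority-ordered health check. Toy model only, not the Traffic Manager API; the endpoint names are hypothetical:

```python
def resolve(endpoints, healthy):
    """Failover policy: return the first endpoint, in priority order,
    whose health probe passes (toy model of Traffic Manager)."""
    for ep in endpoints:
        if healthy.get(ep, False):
            return ep
    return None  # nothing healthy anywhere

# Hypothetical region endpoints, primary first
endpoints = ["westus.contoso.net", "eastus.contoso.net"]

print(resolve(endpoints, {"westus.contoso.net": True,
                          "eastus.contoso.net": True}))   # primary wins
print(resolve(endpoints, {"westus.contoso.net": False,
                          "eastus.contoso.net": True}))   # failover
```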

Demo – Mads Kristensen

Mads Kristensen

ASP.NET application demo

  • PowerShell editor in Visual Studio
  • Example—simple site with some animated GIFs (ClipMeme)
  • One way to do this—from within Browser development tools, change CSS, then replicate in VS
  • Now—do change in Visual Studio
    • It automatically syncs with dev tools in browser
    • BrowserLink
  • Works for any browser
  • If you change in browser tools in one browser, it gets automatically replicated in VS & other tools
  • Put Chrome in design mode
    • As you hover, VS goes to the proper spot in the content
    • Make change in browser and it’s automatically synched back to VS
  • Example of editing some AngularJS
  • Publish – to Staging
    • Publishes just changes
    • “-staging” as part of URL (is this configurable? Or can external users hit staging version of site)
  • Then Swap when you’re ready to officially publish
    • Staging stuff over to production
  • WebJobs
    • Run background task in same context as web site
    • Section in Azure listing them
    • Build as simple C# console app
    • Associate this WebJob with a web site
    • In Web app in VS, associate to WebJob
    • Dashboard shows invocation of WebJob, with return values (input, output, call stack)
    • (No applause??)
  • Traffic Manager
    • Performance / Round Robin / Failover
    • Failover—primary node and secondary node
    • Pick endpoints
    • Web site says “you are being served from West US”—shows that we hit appropriate region

Back to Scott

Summary of Web stuff:


  • Including SSL cert with every web site instance (don’t have to pay Verisign)


Every Azure customer gets 10 free web sites

Mobile Services

  • Develop backends with .NET or Node.js
  • Can connect to any type of device
  • Data stores supported: Table Storage, SQL Database, MongoDB (NoSQL)
  • Can push messages to devices
    • Notification hubs – single message to notification hub, then broadcast to devices
  • Authentication options
    • Facebook, Google, now Active Directory
    • Uses standard OAuth token – use to authenticate on your service
    • Can use same token to access Office 365 APIs
    • Works with any device (iOS, Windows, Android)
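The "single message to notification hub, then broadcast to devices" flow above is a fan-out: your backend calls send once and the hub handles per-device delivery. A toy in-memory model (not the real Azure Notification Hubs SDK):

```python
class NotificationHub:
    """Toy model: one send() call fans a message out to every
    registered device (not the actual Azure SDK)."""
    def __init__(self):
        self.registrations = {}  # device id -> inbox

    def register(self, device_id):
        self.registrations[device_id] = []

    def send(self, message):
        for inbox in self.registrations.values():
            inbox.append(message)
        return len(self.registrations)  # number of devices reached

hub = NotificationHub()
for device in ("ios-1", "android-1", "windows-1"):
    hub.register(device)

print(hub.send("Game starts in 5 minutes"))  # one call, three devices
```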

Yavor Georgiev

Demo – Mobile Services

Mobile Service demo

  • New template for Mobile Service (any .NET language)
  • Built on Web API
  • E.g. ToDoItem and ToDoItemController
  • Supports local development
  • Test client in browser
  • Local / Remote debugging work with Mobile Services

Demo – building app to report problem with Facilities

  • FacilityRequest app
  • Using Entity Framework code-first with SQL database
  • Mobile Services Table Controller
  • Derives from TableController<T>
  • Add authentication to API by adding attribute to controller (assume service already supports Active Directory)
  • Publish – deploy to service
  • App logic put in portable class library – can use on a variety of platforms
  • Authentication
    • Use Active Directory authent library—gives you standard login user experience
    • After login, pulls Active Directory assets into client
  • Can integrate to SharePoint
    • Call out to Office 365 via REST API
    • SharePointProvider

Another great Microsoft demo line: “it’s just that easy”

More demo, Xamarin

  • Portable library, reuse with Xamarin
  • iOS project in Visual Studio
  • Run iPhone simulator from iOS
  • Switch to paired Mac
  • Same app on iOS, using portable class library

Yavor has clearly memorized his presentation—nice job, but a bit mechanical

Back to Scott alone

Azure Active Directory service

  • Active Directory in the cloud
  • Can synch with on premises Active Directory
  • Single sign-on with enterprise credentials
  • Then reuse token with Office 365 stuff

Grant Peterson – CTO, DocuSign

Demo – DocuSign

  • Service built entirely on Microsoft stack (SQL Server, C#, .NET, IIS)
  • Take THAT, iPhone app!
  • 3,000,000 downloads on iPhone so far
  • Can now authenticate with Active Directory

  • Then can send a document, etc.
  • Pull document up from SharePoint, on iPhone, and sign the document
  • He draws signature into document

  • Then saves doc back to SharePoint
  • His code sample shows that he’s doing Objective C, not C#/Xamarin

Back to Scott

  • Scott confirms—you can use Objective C and have Active Directory API
    • iOS, Android SDK

  • Offline Data Sync !
  • Kindle support

Azure – Data

  • SQL Database – >1,000,000 databases now hosted

SQL Server improvements

  • Increasing DB size to 500GB (from 150GB)
  • New 99.95% SLA
  • Self service restore
    • Oops, if you accidentally delete data
    • Previously, you had to go to your backups
    • Now—automatic backups
    • You can automatically rollback based on clock time
    • 31 days of backups
    • Wow !
    • Built-in feature, just there
  • Active geo replication
    • Run in multiple regions
    • Can automatically replicate
    • Can have multiple secondaries in read-only
    • You can initiate failover to secondary region
    • Multiple regions at least 500 miles away
    • Data hosted in Europe stays in Europe (what happens in Brussels STAYS in Brussels)
  • HDInsight
    • Big data analytics
    • Hadoop 2.2, .NET 4.5
    • I love saying “Hadoop”

Let’s talk now about tools

.NET improvements – Language – Roslyn

Anders Hejlsberg – Technical Fellow


  • Compiler exposed as full API
  • C#/VB compilers now written in C# and VB (huh? VB compiler really written in VB??)

Demo – C# 6.0

  • Static usings
    • You type “using Math”
    • IDE suggests re-factoring to remove type name after we’ve added the using
  • Roslyn helps us see preview of re-factored code
  • Can rename methods, it checks validity of name

Announcement – open-sourcing entire Roslyn project

  • Looking at source code—a portion of the source code for the C# compiler
  • Anders publishes Roslyn live, on stage ! (that’s classy)

Demo – use Roslyn to implement a new language feature

  • E.g. French-quoted string literals
  • Lexer – tokenizing source code
  • ScanStringLiteral implementation, add code for new quote character
  • That’s incredibly slick..
  • Then launch 2nd instance of Visual Studio, running modified compiler
  • Holy crap
  • Re-factoring also automatically picks up new language feature
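The demo changes the lexer's string-literal scanner to accept a second quote character. The same idea in miniature, as a toy Python lexer (this is not Roslyn's actual code; supporting «…» is just one extra entry in the quote table):

```python
QUOTES = {'"': '"', '\u00ab': '\u00bb'}  # add French quotes «…» to the scanner

def scan_string_literal(src, pos):
    """Scan a string literal starting at src[pos].
    Returns (value, next_pos). Toy model of the demo's lexer change."""
    open_q = src[pos]
    close_q = QUOTES[open_q]            # KeyError if not a string start
    end = src.index(close_q, pos + 1)   # find the matching close quote
    return src[pos + 1:end], end + 1

print(scan_string_literal('x = "hello"', 4))
print(scan_string_literal('x = \u00abbonjour\u00bb', 4))
```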

Can now use Roslyn compilers on other platforms

Miguel de Icaza – CTO, Xamarin

Demo – Xamarin Studio

  • Xamarin Studio can switch to use runtime with Roslyn compiler
  • E.g. pick up compiler change that Anders just submitted to Codeplex

Miguel gives a C# t-shirt to Anders—that’s classic

Back to Scott

Open Source

  • .NET Foundation –
  • All the Microsoft stuff that they’ve put out as open source

  • Xamarin contributing various libraries
  • This is good—Microsoft gradually more accepting of the open source movement/community

Two more announcements

New Azure Portal

  • Scott mentions DevOps (a good thing)
  • First look at Azure Management Portal
  • “Bold reimagining”

Bill Staples – Director of PM, Azure Application Platform

Azure start board

Some “parts” on here by default

  • Service health – map
  • Can I make this a bit smaller?
  • Blade – drilldown into selected object – breadcrumb or “journey”

  • Modern navigational structure
  • Number one request—more insight into billing

  • “You’re never going to be surprised by bills again”
  • Creating instances:

  • Browse instances

Demo – Set up DevOps lifecycle

  • Using same services that Visual Studio Online uses
  • Continuous deployment—new web site with project to deploy changes

  • Open in Visual Studio
  • Commit from Visual Studio—to local repository and repository in cloud
  • Drill down into commits and even individual files
  • Looking at source code from portal

  • Can do commits from here, with no locally installed tools
  • Can do diffs between commits
  • Auto build and deploy after commit
  • “Complete DevOps lifecycle in one experience”

DevOps stuff

  • Billing info for the web app
  • Aggregated view of operations for this “resource group”
  • Topology of app
  • Analytics

  • Webtests – measure experience from customer’s point of view

  • “Average response time that the customer is enjoying”
  • Can re-scale up to Medium without re-deploying (finally!)

  • Database monitoring
  • I just have to say that this monitoring stuff is just fantastic—all of the stuff that I was afraid I’d have to build myself
  • Check Resource Group into source code – “Azure Resource Manager Preview”

PowerShell – Resource Management Service

  • Various templates that you can browse for various resource groups
  • E.g. Web Site w/SQL Database
  • Basically a JSON file—declarative description of the application
  • Can pass the database connection string from DB to web app
  • Very powerful stuff
  • Can combine these scripts with Puppet stuff
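The "basically a JSON file" idea above, with a value passed from one resource to another: a toy substitution pass over a template. The template shape and `{ref:...}` placeholder syntax here are invented for illustration, not actual Azure Resource Manager syntax:

```python
import json

# Hypothetical template: the website references the database's
# connection string via a placeholder (not real ARM syntax)
template = json.loads("""
{
  "resources": {
    "db":   {"connectionString": "Server=db1;Database=app"},
    "site": {"appSettings": {"DB": "{ref:db.connectionString}"}}
  }
}
""")

def resolve(template):
    """Replace {ref:resource.property} placeholders with real values."""
    resources = template["resources"]
    settings = resources["site"]["appSettings"]
    for key, value in settings.items():
        if value.startswith("{ref:"):
            res, prop = value[5:-1].split(".")
            settings[key] = resources[res][prop]
    return template

resolved = resolve(template)
print(resolved["resources"]["site"]["appSettings"]["DB"])
```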

Azure portal on tablet, e.g. Surface

  • Or could put on big screen
  • Here’s the Office Developer site

  • Can see spike in page views, e.g. “what page was that”?

Azure Gallery:

“Amazing DevOps experience”

Back to Scott


  • New portal
  • Resource Manager – single deployment of a resource group
    • Can include IaaS and PaaS pieces
  • Visual Studio Online general availability

Get started –

Steve Guggenheimer – Corp VP & Chief Evangelist, Microsoft

Dialog starts with type of app and devices

Areas of feedback

  • Help me support existing investments
  • Cloud and Mobile first development
  • Maximize business opportunities across platforms

Should at least be very easy to work with “common core”:

“All or none”? — not the case

John Shewchuk – Technical Fellow


Support existing technologies

  • Desktop Apps
    • WinRT, WPF (e.g. Morgan Stanley)
    • Still going to build these apps in WPF

Demo – Typical App, Dental thing

  • Standard WPF app
  • Notice appointments in Office 365 for dentist
  • Active Directory in cloud is one of the big enablers
    • Full access to users’ calendar
    • Set ClientID GUID to hook application to Active Directory
  • WPF app that talks to Office 365 calendar service

New Office 365 APIs:


Demo – VB6 app – Sales Agent

  • VB6 App to Win Forms app
  • WebMap 2 transformation – UI projected out as HTML5
    • Win Forms form running as HTML5
  • Then move same app to phone

Internet of Things:


Demo of Flight app for pilots, running on Surface:

  • Value is—the combination of new devices, connecting to existing system


  • Building complementary set of services

Flipboard on Windows 8

  • Already had good web properties working with HTML5
  • Created hybrid app, good native experience
  • Brought app to phone (technology preview)

  • Nokia 520, Flipboard fast on cheap/simple phone (uses DirectX)

Foursquare – tablet and Windows Phone app

  • Win Phone Silverlight 8.1
  • Geofences, little program packaged with app
    • Run operation goes to see nearby venues
    • E.g. Person goes to a geo location and live tile for Foursquare app pops up content

App showing pressure on foot – Heapsylon Sensoria socks

John Gruber – Daring Fireball

  • Video – partnership

  • Vesper – Notes app
  • Using Mobile Services on Azure
  • Wow—we’ve got John Gruber evangelizing about Windows? Whoda-thunkit!

Gobbler – service for musicians and other “Creatives”

  • Video

  • Communication between musicians and collaborator
  • DJ/Musician – sending files back and forth without managing data themselves
  • Everything on Azure – “everything that we needed was already there”

Gaming – PC gaming, cloud assistance

  • Destroying building, 3D real-time modeling
  • Frame rate drops
  • Overwhelms local machine (even high-end gaming gear)
  • But then run the same app and use cloud and multiple devices to do cloud computation
  • Keep frame rate high
  • Computation on cloud, rendering on client (PC)

Cloud-assist on wargaming game on PC:

Demo – WebGL in IE, but on phone:

Babylon library (Oculus):

  • Oculus rift, on PC, WebGL
  • Running at 200Hz

Cross-platform, starting in Windows family, then spreading out

  • Fire breathing with a 24-GoPro array
  • Video

  • Not really clear what’s going on here.. Skiing, guy with tiger, etc.
  • What’s the connection to XBox One and Windows 8?
  • Ok, In App purchases ?

Doodle God 2 on XBox:

  • Also running on PC
  • C++ / DirectX
  • Same set of files, running on Phone, XBox, PC
    • Just a couple of minor #ifdefs
  • Take an existing investment and spread it across multiple environments
  • Use cloud to connect various aspects of game together

Partnership with Oracle, Java in Azure

  • Demo in Azure portal
  • Click Java version in web site settings
  • Java “incredibly turnkey”

Accela demo

  • They want to create app powered by Accela data
  • Split out pieces of URL
    • data – from
    • news – elsewhere
    • etc
  • Common identity across many services
  • Code is out on codeplex
  • Application Request Routing (ARR)

Make something in Store available as both web site and app

  • New tool – App Studio – copy web site, expose as App
    • Then drop app into Store
  • App Studio produces Web App template
    • Driven by JSON config file
  • Challenges in wrapping a web site
    • What do you do when there is no network?
    • App just gets big 404 error
    • But you want app responsive both online and offline
  • New feature – offline section in Web App template
    • useSuperCache = true
    • Store data locally
    • Things loaded into local cache
    • THEN unplug from network
    • Then fully offline, but you can still move around in app, locally cached
  • Do the same thing, app on Windows phone
    • Dev has done some responsive layout stuff
  • Then Android device
    • Same web app runs here
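The `useSuperCache` behavior described above amounts to: refresh the local cache while online, serve from it while offline, and fail only for content never cached. A toy sketch (the cache policy and names are assumptions based on the demo, not the Web App template's actual implementation):

```python
cache = {}

def fetch(url, network_up, live_content=None):
    """Cache-then-network: online responses refresh the cache;
    offline requests are served from it (toy model)."""
    if network_up:
        cache[url] = live_content   # refresh the local copy
        return live_content
    if url in cache:
        return cache[url]           # offline, but previously cached
    return "404: offline and not cached"

print(fetch("/home", network_up=True, live_content="<h1>Home</h1>"))
print(fetch("/home", network_up=False))  # unplugged: served from cache
print(fetch("/new",  network_up=False))  # never cached: fails
```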

Zoopla app on Windows Phone

  • Can easily take web content, bring to mobile app
  • Include offline

Xamarin – Bring Windows app onto iPad

  • Windows Universal project
  • Runs on iPad, looks like Win 8 app with hub, etc.
  • Also running on Android tablet
  • How is this working? HTML5?

Stuff available on iOS and Android:

All done!

Azure Pricing for Web and Worker Roles

Here’s a simple chart showing current Azure pricing for web and worker roles.  The chart shows the per-month cost based on the # of instances and a particular instance size.


Instance size:

# instances      XS         S           Med         Lg          XLarge
 1               $14.40     $86.40      $172.80     $345.60     $691.20
 2               $28.80     $172.80     $345.60     $691.20     $1,382.40
 3               $43.20     $259.20     $518.40     $1,036.80   $2,073.60
 4               $57.60     $345.60     $691.20     $1,382.40   $2,764.80
 5               $72.00     $432.00     $864.00     $1,728.00   $3,456.00
 6               $86.40     $518.40     $1,036.80   $2,073.60   $4,147.20
 7               $100.80    $604.80     $1,209.60   $2,419.20   $4,838.40
 8               $115.20    $691.20     $1,382.40   $2,764.80   $5,529.60
 9               $129.60    $777.60     $1,555.20   $3,110.40   $6,220.80
10               $144.00    $864.00     $1,728.00   $3,456.00   $6,912.00
11               $158.40    $950.40     $1,900.80   $3,801.60   $7,603.20
12               $172.80    $1,036.80   $2,073.60   $4,147.20   $8,294.40
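The figures above are linear in both dimensions: monthly cost = instances × hourly rate × 720 hours. The hourly rates below are inferred by dividing the one-instance figures by 720 (e.g. $14.40 / 720 = $0.02 for XS); they are not quoted in the chart itself.

```python
# Hourly rates inferred from the table ($14.40 / 720 h = $0.02 for XS;
# each size from Small upward doubles the previous rate)
RATES = {"XS": 0.02, "S": 0.12, "Med": 0.24, "Lg": 0.48, "XL": 0.96}
HOURS_PER_MONTH = 720

def monthly_cost(size, instances):
    return round(RATES[size] * HOURS_PER_MONTH * instances, 2)

print(monthly_cost("XS", 1))    # 14.4
print(monthly_cost("Med", 4))   # 691.2
print(monthly_cost("XL", 12))   # 8294.4
```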



Here’s a chart that shows the same data.


Session – Services Symposium: Enterprise Grade Cloud Applications

PDC 2008, Day #4, Session #2, 1 hr 30 mins

Eugenio Pace

My second session on Thursday was a continuation of the cloud services symposium from the first session.  There was a third part to the symposium, which I did not attend.

The presenter for this session, Eugenio, was not nearly as good a presenter as Gianpaolo from the previous session.  So it was a bit less dynamic, and harder to stay interested.

The session basically consisted of a single demo, which illustrated some of the possible solutions to the Identity, Monitoring, and Integration challenges mentioned in the previous session.


Eugenio pointed out the problems involved in authentication/authorization.  You don’t want to require the enterprise users to have a unique username/password combination for each service that they use.  And pushing out the enterprise (e.g. Active Directory) credential information to the third party service is not secure and creates a management nightmare.

The proposed solution is to use a central (federated) identity system to do the authentication and authorization.  This is the purpose of the Azure Access Control service.


The next part of the demo showed how Azure supports remote management, on the part of IT staff at an individual customer site, of their instance of your application.  The basic things that you can do remotely include:

  • Active real-time monitoring of application health
  • Trigger administrative actions, based on the current state

The end result (and goal) is that you have the same scope of control over your application as you’d have if it were on premises.

Application Integration

Finally, Eugenio did some demos related to “process integration”—allowing your service to be called from a legacy service or system.  This demo actually woke everyone up, because Eugenio brought an archaic green-screen AS400 system up in an emulator and proceeded to have it talk to his Azure service.


The conclusions were recommendations to both IT organizations and ISVs:

  • Enterprise IT organization
    • Don’t settle for sub-optimal solutions
    • Tap into the benefits of Software+Services
  • ISV
    • Don’t give them an excuse to reject your solution
    • Make use of better tools, frameworks, and services

Session – Services Symposium: Expanding Applications to the Cloud

PDC 2008, Day #4, Session #1, 1 hr 30 mins

Gianpaolo Carraro

As the last day of PDC starts, I’m down to four sessions to go.  I’ll continue doing a quick blog post on each session, where I share my notes, as well as some miscellaneous thoughts.

The Idea of a Symposium

Gianpaolo started out by explaining that they were finishing PDC by doing a pair of symposiums, each a series of three different sessions.  One symposium focused on parallel computing and the other on cloud-based services.  This particular session was the first in the set of three that addressed cloud services.

The idea of a symposium, explained Gianpaolo, is to take all of the various individual technologies and try to sort of fit the puzzle pieces together, providing a basic context.

The goal was also to present some of the experience that Microsoft has gained in early usage of the Azure platform over the past 6-12 months.  He said that he himself has spent the last 6-12 months using the new Services, so he had some thoughts to share.

This first session in the symposium focused on taking existing business applications and expanding them to “the cloud”.  When should an ISV do this?  Why?  How?

Build vs. Buy and On-Premises vs. Cloud

Gianpaolo presented a nice matrix showing the two basic independent decisions that you face when looking for software to fulfill a need.

  • Build vs. Buy – Can I buy a packaged off-the-shelf product that does what I need?  Or are my needs specialized enough that I need to build my own stuff?
  • On-Premises vs. Cloud – Should I run this software on my own servers?  Or host everything up in “the cloud”?

There are, of course, tradeoffs on both sides of each decision.  These have been discussed ad infinitum elsewhere, but the basic tradeoffs are:

  • Build vs. Buy – Features vs. Cost
  • On-Premises vs. Cloud – Control vs. Economy of Scale

Here’s the graph that Gianpaolo presented, showing six different classes of software, based on how you answer these questions.  Note that on the On-Premises vs. Cloud scale, there is a middle column that represents taking applications that you essentially control and moving them to co-located servers.

This is a nice way to look at things.  It shows that, for each individual software function, it can live anywhere on this graph.  In fact, Gianpaolo’s main point is that you can deploy different pieces of your solution at different spots on the graph.

So the idea is that while you might start off on-premises, you can push your solution out to either a co-located hosting server or to the cloud in general.  This is true of both packaged apps as well as custom-developed software.


The main challenge in moving things out of the enterprise is dealing with the various issues that show up now when your data needs to cross the corporate/internet boundary.

There are several separate types of challenges that show up:

  • Identity challenges – as you move across various boundaries, how does the software know who you are and what you’re allowed to access?
  • Monitoring and Management challenges – how do you know if your application is healthy, if it’s running out in the cloud?
  • Application Integration challenge – how do various applications communicate with each other, across the various boundaries?

Solutions to the Identity Problem

Gianpaolo proposed the following possible solutions to this problem of identity moving across the different boundaries:

  • Federated ID
  • Claim-based access control
  • Geneva identity system, or Cardspace

The basic idea was that Microsoft has various assets that can help with this problem.

Solutions to the Monitoring and Management Problem

Next, the possible solutions to the monitoring and management problem included:

  • Programmatic access to a “Health” model
  • Various management APIs
  • Firewall-friendly protocols
  • Powershell support

Solutions to the Application Integration Problem

Finally, some of the proposed solutions to the application integration problem included:

  • ServiceBus
  • Oslo
  • Azure storage
  • Sync framework

The ISV Perspective

The above issues were all from an IT perspective.  But you can look at the same landscape from the perspective of an independent software vendor, trying to sell solutions to the enterprise.

To start with, there are two fundamentally different ways that the ISV can make use of “the cloud”:

  • As a service mechanism, for delivering your services via the cloud
    • You make your application’s basic services available over the internet, no matter where it is hosted
    • This is mostly a customer choice, based on where they want to deploy
  • As a platform
    • Treating the cloud as a platform, where your app runs
    • Benefits are the economy of scale
    • Mostly an ISV choice
    • E.g. you could use Azure without your customer even being aware of it

When delivering your software as a service, you need to consider things like:

  • Is the feature set available via cloud sufficient?
  • Firewall issues
  • Need a management interface for your customers

Some Patterns

Gianpaolo presented some miscellaneous design considerations and patterns that might apply to applications deployed in the cloud.


  • Design for average load, handling the ‘peak’ as an exception
  • I.e. only go to the cloud for scalability when you need to

Worker / Queue / Blob Pattern

Let’s say that you have a task like encoding and publishing of video.  You can push the data out to the cloud, where the encoding work happens.  (The raw data is placed in a “blob” in cloud storage).  You then add an entry to a queue, indicating that there is work to be done, and a separate worker process eventually does the encoding work.

This is a nice pattern for supporting flexible scaling—both the queues and the worker processes could be scaled out separately.
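Here’s a rough sketch of the pattern in Python, with a dict standing in for blob storage and an in-process queue standing in for the cloud queue (the names are mine, purely illustrative, not Azure’s API):

```python
import queue
import threading

blob_store = {}                 # stands in for cloud blob storage
work_queue = queue.Queue()      # stands in for the cloud queue

def submit_video(name, raw_bytes):
    """Producer: upload raw data to a blob, then enqueue a work item."""
    blob_store[name] = raw_bytes
    work_queue.put(name)        # the queue entry just names the blob

def worker():
    """Worker: pull work items and encode the referenced blob."""
    while True:
        name = work_queue.get()
        if name is None:        # sentinel to shut the worker down
            break
        blob_store[name] = b"encoded:" + blob_store[name]
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()
submit_video("clip1", b"raw-frames")
work_queue.join()               # wait until the worker has drained the queue
work_queue.put(None)
t.join()
```

Because producers only touch the blob store and the queue, you could add more worker threads (or, in the real pattern, more worker instances) without changing the producer at all.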

CAP: Pick 2 out of 3

  • Consistency
  • Availability
  • Tolerance to network Partitioning

Eventual Consistency (ACID – BASE)

The idea here is that we are all used to the ACID characteristics listed below.  We need to guarantee that the data is consistent and correct—which means that performance likely will suffer.  As an example, we might have a process submit data synchronously because we need to guarantee that the data gets to its destination.

But Gianpaolo talked about the idea of “eventual consistency”.  For most applications, while it’s important for your data to be correct and consistent, it’s not necessary for it to be consistent right now.  This leads to a model that he referred to as BASE, with the characteristics listed below.

  • ACID
    • Atomicity
    • Consistency
    • Isolation
    • Durability
  • BASE
    • Basically Available
    • Soft state
    • Eventually consistent
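A toy illustration of the BASE idea: in this Python sketch (entirely illustrative, using two dicts as stand-ins for replicated stores), a write is acknowledged by the primary immediately, while a replica only converges after a background sync runs.  Reads from the replica in between are stale, but the system stays available.

```python
primary, replica = {}, {}

def write(key, value):
    primary[key] = value        # acknowledged immediately (available)

def sync():
    replica.update(primary)     # background replication, "eventually"

write("balance", 100)
stale = replica.get("balance")  # replica hasn't converged yet
sync()
fresh = replica.get("balance")  # now consistent with the primary
```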

Fundamental Lesson

Basically the main takeaway is:

  • Put the software components in the place that makes the most sense, given their use

Session – Building Mesh-Enabled Web Applications Using the Live Framework

PDC 2008, Day #3, Session #5, 1 hr 15 mins

Arash Ghanaie-Sichanie

Throughout the conference, I bounced back and forth between Azure and Live Mesh sessions.  I was trying to make sense of the difference between them and understand when you might use one vs. the other.

I understand that Azure Services is the underlying platform that Live Services, and Live Mesh, are built on.  But at the outset, it still wasn’t quite clear what class of applications Live Services were targeted at.  Which apps would want to use Live Services and which would need to drop down and use Azure?

I think that after Arash’s talk, I have a working stab at answering this question.  This is sort of my current understanding, that I’ll update as things become more clear.

Question: When should I use Live Services / Live Mesh ?


  • Create a Live Mesh application (Mesh-enabled) if your customers are currently, or will become, Live customers.
  • Otherwise, use Azure cloud-based services outside of Live, or the other services built on top of Azure:
    • Azure storage services
    • Sync Framework
    • Service Bus
    • SQL Data Services

Unless I’m missing something, you can’t make use of features of the Live Operating Environment, either as a web application or a local desktop client, unless your application is registered with Live and your user has added your application to his Live account.

The one possible loophole that I can see is that you might just have your app always authorize, for all users, using a central account.  That account could be pre-created and pre-configured.  Your application would then use the same account for all users, enabling some of the synchronization and other options.  But even this approach might not work—it’s possible that any device where local Mesh data is to be stored needs to be registered with that Live account and so your users wouldn’t normally have the authorization to join their devices to your central mesh.

Three Takeaways

Arash listed three main takeaways from this talk:

  • Live Services add value to different types of applications
  • Mesh-enabled web apps extend web sites to the desktop
  • The Live Framework is a standards-based API, available to all types of apps

How It All Works

The basic idea is to start with a “Mesh-enabled” web site.  This is a web site that delivers content from the user’s Mesh, including things like contacts and files.  Additionally, the web application could store all of its data in the Mesh, rather than on the web server where it is hosted.

Once a web application is Mesh-enabled, you have the ability to run it on various devices.  You basically create a local client, targeted at a particular platform, and have it work with data through the Live Framework API.  It typically ends up working with a cached local copy of the data, and the data is automatically synchronized to the cloud and then to other devices that are running the same application.

This basically implements the Mesh vision that Ray Ozzie presented first at Mix 2008 in Las Vegas and then again at PDC 2008 in Los Angeles.  The idea is that we move a user’s data into the cloud and then the data follows them around, no matter what device they’re currently working on.  The Mesh knows about a user’s:

  • Devices
  • Data, including special data like Contacts
  • Applications
  • Social graph  (friends)

The User’s Perspective

The user must do a few things to get your application or web site pointing at his data.  As a developer, you send him a link that lets him go sign up for a Live account and then register your application in his Mesh.  Registration, from the user’s perspective, is a way for him to authorize your application to access his Mesh-based data.

Again, it’s required that the user has, or gets, a Live account.  That’s sort of the whole idea—we’re talking about developing applications that run on the Live platform.

Run Anywhere

There is still work to be done, on the part of the developer, to be able to run the application on various devices, but the basic choices are:

  • Locally, on a client PC or Mac
  • In a web browser, anywhere
  • From the Live Desktop itself, which is hosted in a browser

(In the future, with Silverlight adding support for running Silverlight-sandboxed apps outside of the browser, we can imagine that as a fourth option for Mesh-enabled applications).

The Developer’s Perspective

In addition to building the mesh application, the developer must also register his application, using the Azure Services Portal.  Under the covers, the application is just an Azure-based service.  And so you can leverage the Azure standard goodies, like running your service in a test mode and then deploying/publishing it.

One other feature that is made available to Mesh application authors is the ability to view basic Analytics data for your application.  Because the underlying Mesh service is aware of your application, wherever it is running, it can collect data about usage.    The data is “anonymized”, so you can’t see data about individual users, but can view general metrics.

Arash talked in some detail about the underlying security model, showing how a token is granted to the user/application.


Mesh obviously seems a good fit for some applications.  You can basically run on a platform that gives you a lot of services, as well as access to useful data that may already exist in a particular user’s Mesh environment.  There is also some potential for cross-application sharing of data, once common data models are agreed upon.

But the choice to develop on the Mesh platform implies a decision to sign your users up as part of the Mesh ecosystem.  While the programming APIs are entirely open, using basic HTTP/REST protocols, the platform itself is owned/hosted/run exclusively by Microsoft.

Not all of your users will want to go through the hassle of setting up a Live account in order to use your application.  What makes it a little worse is that the process for them is far from seamless.  It would be easier if you could hide the Live branding and automate some of the configuration.  But the user must actually log into and authorize your application.  This is a pretty high barrier, in terms of usability, for some users.

This also means that your app is betting on the success of the platform itself.  If the platform doesn’t become widely adopted, few users will live (pardon the pun) in that environment.  And the value of hosting your application in that ecosystem becomes less clear.

It remains to be seen where Live as a platform will go.  The tools and the programming models are rich and compelling.  But whether the platform will live up to Ozzie’s vision is still unclear.

Session – Windows Azure Tables: Programming Cloud Table Storage

PDC 2008, Day #3, Session #3, 1 hr 15 mins

Pablo Castro, Niranjan Nilakantan

Pablo and Niranjan did a session that went into some more detail on how the Azure Table objects can be used to store data in the cloud.


This talk dealt with the “Scalable Storage” part of the new Azure Services platform.  Scalable Storage is a mechanism by which applications can store data “in the cloud” in a highly scalable manner.

Data Types

There are three fundamental data types available to applications using Azure Storage Services:

  • Blobs
  • Tables
  • Queues

This session focused mainly on Tables.  Specifically, Niranjan and Pablo addressed the different ways that an application might access the storage service programmatically.


Tables are a “massively scalable” data type for cloud-based storage.  They are able to store billions of rows, are highly available, and “durable”.  The Azure platform takes care of scaling out the data automatically to multiple servers, if necessary.  (With some hints on the part of the developer).

Programming Model

Azure Storage Services are accessed through ADO.NET Data Services (Astoria).  Using ADO.NET Data Services, there are basically two ways for an application to access the service.

  • .NET API   (System.Data.Services.Client)
  • REST interface   (using HTTP URIs directly)
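To get a feel for the REST flavor, here’s a hedged sketch of how a table query might be expressed as a URI.  The account name and filter syntax below are illustrative only; check the actual Table storage REST documentation for the exact format.

```python
def table_query_uri(account, table, filter_expr):
    """Build an illustrative query URI in the REST/Astoria style:
    the table is addressed as a resource, and the query rides in
    the $filter portion of the URI."""
    base = f"http://{account}.table.core.windows.net"
    return f"{base}/{table}()?$filter={filter_expr}"

uri = table_query_uri("myaccount", "Orders", "PartitionKey eq '2008-10-29'")
```

The point is that any HTTP-capable stack, not just .NET, can issue these requests, which is what makes the REST interface interesting.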

Data Model

It’s important to note that Azure knows nothing about your data model.  It does not store data in a relational database or access it via a relational model.  Rather, you specify a Table that you’d like to store data in, along with a simple query expression for the data that you’d like to retrieve.

A Table represents a single Entity and is composed of a collection of rows.  Each row is uniquely defined by a Row Key, which the developer specifies.  Additionally, the developer specifies a Partition Key, which is used by Azure in knowing how to split the data across multiple servers.

Beyond the Row Key and Partition Key, the developer can add any other properties that she likes, up to a total of 255 properties.  While the Row and Partition Keys must be string data types, the other properties support other data types.


Azure storage services are meant to be automatically scalable, meaning that the data will be automatically spread across multiple servers, as needed.

In order to know how to split up the data, Azure uses a developer-specified Partition Key, which is one of the properties of each record.  (Think “field” or “column”).

The developer should pick a partition key that makes sense for his application.  It’s important to remember two things:

  • Querying for all data having a single value for a partition key is cheap
  • Querying for data having multiple partition key values is more expensive

For example, if your application often retrieves data by date and shows data typically for a single day, then it would make sense to have a CurrentDate property in your data entity and to make that property the Partition Key.

The way to think of this is that each possible unique value for a Partition Key represents a “bucket” that will contain one or more records.  If you pick a key that results in only one record per bucket, that would be inefficient.  But if you pick a key that results in a set of records in the bucket that you are likely to ask for together, this will be efficient.
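The bucket intuition can be sketched in a few lines of Python (purely illustrative, grouping in memory rather than across storage servers):

```python
from collections import defaultdict

def partition(records, key_name):
    """Group records by the value of their partition key. Records
    sharing a key end up in the same bucket, so fetching one bucket
    is the cheap, single-partition query."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[key_name]].append(rec)
    return buckets

rows = [
    {"Date": "2008-10-29", "Event": "keynote"},
    {"Date": "2008-10-29", "Event": "session"},
    {"Date": "2008-10-30", "Event": "symposium"},
]
buckets = partition(rows, "Date")
# One cheap query on "2008-10-29" would return both of that day's rows.
```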

Accessing the Data Programmatically

Pablo demonstrated creating the classes required to access data stored in an Azure storage service.

He started by creating a class representing the data entity to be stored in a single table.  He selected and defined properties for the Partition and Record key, as well as other properties to store any other desired data in.

Pablo also recommended that you create a single class to act as an entry point into the system.  This class then acts as a service entry point for all of the data operations that your client application would like to perform.

He also demonstrated using LINQ to run queries against the Azure storage service.  LINQ automatically creates the corresponding URI to retrieve, create, update, or delete the data.


Pablo and Niranjan also touched on a few other issues that most applications will deal with:

  • Dealing with concurrent updates  (uses Etag and if-match)
  • Pagination  (using continuation tokens)
  • Using Azure Queues for pseudo-transactional deletion of data
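The pagination item is worth a quick sketch.  A continuation-token loop has roughly this shape; fetch_page here is a local stand-in I invented for the actual storage call, which returns a token only when more data remains.

```python
DATA = list(range(10))          # stand-in for rows held by the service

def fetch_page(token, page_size=4):
    """Return one page of rows plus a continuation token, or None
    when the last page has been served."""
    start = token or 0
    page = DATA[start:start + page_size]
    more = start + page_size < len(DATA)
    return page, (start + page_size if more else None)

rows, token = [], None
while True:                     # client loops until no token comes back
    page, token = fetch_page(token)
    rows.extend(page)
    if token is None:
        break
```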


Pablo and Niranjan demonstrated that it was quite straightforward to access Azure storage services from a .NET application.  It’s also the case that non-.NET stacks could make use of the same services using a simple REST protocol.

It was also helpful to see how Pablo used ADO.NET Data Services to construct a service layer on top of the Azure storage services.  This seems to make consuming the data pretty straightforward.

(I still might have this a little confused—it’s possible that Astoria was just being used to wrap Azure services, rather than exposing the data in an Astoria-based service to the client.  I need to look at the examples in a little more detail to figure this out).

Original Materials

You can find the video of the session at:

Session – Live Services: Live Framework Programming Model Architecture and Insights

PDC 2008, Day #3, Session #1, 1 hr 15 mins

Ori Amiga

My next session dug a bit deeper into the Live Framework and some of the architecture related to building a Live Mesh application.

Ori Amiga was presenting, filling in for Dharma Shukla (who just became a new Dad).


It’s still a little unclear what terminology I should be using.  In some areas, Microsoft is switching from “Mesh” to just plain “Live”.  (E.g. Mesh Operating Environment is now Live Operating Environment).  And the framework that you use to build Mesh applications is the Live Framework.  But they are still very much talking about “Mesh”-enabled applications.

I think that the way to look at it is this:

  • Azure is the lowest level of cloud infrastructure stuff
  • The Live Operating Environment runs on top of Azure and provides some basic services useful for cloud applications
  • Mesh applications run on top of the LOE and provide access to a user’s “mesh”: devices, applications and data that lives inside their mesh

I think that this basically means that you could have an application that makes use of the various Live Services in the Live Operating Environment without actually being a Mesh application.  On the other hand, some of the services in the LOE don’t make any sense to non-Mesh apps.

Live Operating Environment  (LOE)

Ori reviewed the Live Operating Environment, which is the runtime that Mesh applications run on top of.  Here’s a diagram from Mary Jo Foley’s blog:

This diagram sort of supports my thought that access to a user’s mesh environment is different from the basic stuff provided in the LOE.  According to this particular view, Live Services are services that provide access to the “mesh stuff”, like their contact lists, information about their devices, the data stores (data stored in the mesh or out on the devices), and other applications in that user’s mesh.

The LOE would contain all of the other stuff—basically a set of utility classes, akin to the CLR for desktop-based applications.  (Oh wait, Azure is supposed to be “akin to the CLR”). *smile*

Ori talked about a list of services that live in the LOE, including:

  • Scripting engine
  • Formatters
  • Resource management
  • FSManager
  • Peer-to-peer communications
  • HTTP communications
  • Application engine
  • Apt(?) throttle
  • Authentication/Authorization
  • Notifications
  • Device management

Here’s another view of the architecture (you can also find it here).

Also, for more information on the Live Framework, you can go here.

Data in the Mesh

Ori pointed out an important point about how Mesh applications access their data.  If you have a Mesh client running on your local PC, and you’ve set up its associated data store to synch between the cloud and that device, the application uses local data, rather than pulling data down from the cloud.  Because it’s working entirely with locally cached data, it can run faster than the corresponding web-based version (e.g. running in the Live Desktop).

Resource Scripts

Ori talked a lot about resource scripts and how they might be used by a Mesh-enabled application.  An application can perform actions in the Mesh using these resource scripts, rather than performing actions directly in the code.

The resource scripting language contains things like:

  • Control flow statements – sequence and interleaving, conditionals
  • Web operation statements – to issue HTTP POST/PUT/GET/DELETE
  • Synchronization statements – to initiate data synchronization
  • Data flow constructs – for binding statements to other statements(?)

Ori did a demo that showed off a basic script.  One of the most interesting things was how he combined sequential and interleaved statements.  The idea is that you specify what things you need to do in sequence (like getting a mesh object and then getting its children), and what things you can do in parallel (like getting a collection of separate resources).  The parallelism is automatically taken care of by the runtime.
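The sequence/interleave combination might look roughly like this in ordinary code, with fetch as a hypothetical stand-in for a web operation statement (in the actual resource scripts, the runtime does the fan-out for you declaratively):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(resource):
    """Hypothetical stand-in for an HTTP GET statement."""
    return f"data:{resource}"

# Sequence: get the mesh object first, then (using it) its child list.
mesh_object = fetch("mesh")
child_names = ["a", "b", "c"]          # would come from mesh_object

# Interleave: fetch the independent child resources concurrently.
with ThreadPoolExecutor() as pool:
    children = list(pool.map(fetch, child_names))
```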

Custom Data

Ori also talked quite a bit about how an application might view its data.  The easiest thing to do would be to simply invent your own schema and then be the only app that reads/writes the data in that schema.

A more open strategy, however, would be to create a data model that other applications could use.  Ori talked philosophically here, arguing that this openness serves to improve the ecosystem.  If you can come up with a custom data model that might be useful to other applications, they could be written to work with the same data that your application uses.

Ori demonstrated this idea of custom data in Mesh.  Basically you create a serializable class and then mark it up so that it gets stored as user data within a particular DataEntry.  (Remember: Mesh objects | Data feeds | Data entries).

This seems like an attractive idea, but it seems a bit clunky.  The custom data is embedded into the standard AtomPub stream, but not in a queryable way.  It looked more like it was jammed into an XML element in the <DataEntry> element.  This means that your custom data items would not be directly queryable.

Ori did go on to admit that custom data isn’t queryable or indexable, and is really meant only for “lightweight data”.  This is really at odds with the philosophy of a reusable schema for other applications.
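For what it’s worth, the embedding looks roughly like this: a serialized payload tucked inside the entry’s XML.  The element names below are my own invention, not the actual Live Framework schema, and the sketch only shows why such data is opaque to the service’s query layer.

```python
import xml.etree.ElementTree as ET

# Build an Atom-style entry with a custom payload jammed inside it.
entry = ET.Element("entry", xmlns="http://www.w3.org/2005/Atom")
ET.SubElement(entry, "title").text = "My DataEntry"
user_data = ET.SubElement(entry, "UserData")   # hypothetical wrapper element
expense = ET.SubElement(user_data, "Expense")  # the app's own schema
ET.SubElement(expense, "Amount").text = "42.50"

xml = ET.tostring(entry, encoding="unicode")
# The service sees <UserData> as one opaque chunk; it can't index Amount.
```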

Tips & Tricks

Finally, Ori presented a handful of tips & tricks for working with Mesh applications:

  • To clean out local data cache, just delete the DB/MR/Assembler directories and re-synch
  • Local metadata is actually stored in SQL Server Express.  Go ahead and peek at it, but be careful not to mess it up.
  • Use the Resource Model Browser to really see what’s going on under the covers.  What it shows you represents the truth of what’s happening between the client and the cloud
  • One simple way to track synch progress is to just look at the size of the Assembler and MR directories
  • Collect logs and send to Microsoft when reporting a problem


Ori finished with the following summary:

  • Think of the cloud as just a special kind of device
  • There is a symmetric cloud/client programming model
  • Everything is a Resource