That Conference 2017 – Intro to Docker

That Conference 2017, Kalahari Resort, Lake Delton, WI
Intro to Docker – John Ptacek

Day 1, 7 Aug 2017

Disclaimer: This post contains my own thoughts and notes based on attending That Conference 2017 presentations. Some content maps directly to what was originally presented. Other content is paraphrased or represents my own thoughts and opinions and should not be construed as reflecting the opinion of the speakers.

Executive Summary

  • Docker containers make deploying a running application easier and more repeatable
  • Tooling exists to deploy to Azure or AWS
  • Microsoft is definitely pushing Docker as a unit of deployment

Docker

  • Not a programming language
  • Not a framework

 

What Docker Is

  • Virtualization technology
    • Server -> VM -> PaaS -> Containers
  • Tool for software
    • Installation, removal, upgrading
    • Distribution
    • Trusting
    • Managing

 

Docker

  • Automates setup/configuration of development environments
    • E.g. Far quicker to get up and running with a typical server environment
  • Happy place for developers and infrastructure
    • Avoids typical friction between devops and devs
  • Analogy for shipping containers
    • Contains everything you need; container moved from place to place
    • Encapsulation
  • Native for Linux
    • Linux's containerization strategy became the starting point for Docker
  • Installs for OS X and Windows
    • But a bit finicky sometimes
  • It’s Plug In Software (rather than installed software)
    • If it works on your dev machine, it will work anywhere

 

Why Docker?

  • Resource efficiency
    • 26:1 performance improvement over VMs
  • Speed
    • Smaller units, quicker build/deploy
    • Leads to talking about microservices
  • Agile
    • Still virtualization, but you ship code in containers (like shipping containers)
  • Lightweight
    • Just Enough OS – only the bits you need in the OS are turned on
  • Consistency
    • Portability and reproducibility. Dockerfile, not clicking through installers
    • Not relying on how somebody does a particular install or configuration
    • If it works on my machine, then it works anywhere
  • Microsoft says so
    • Microsoft is moving a lot of stuff towards Docker
    • SPS: How does this strategy fit in with Azure architectures
  • SharePoint story
    • Get things set up at home, but then have trouble rolling out SharePoint on customer’s server
    • Because they don’t necessarily have the same stuff enabled on their server
  • Environmental Drift
    • Avoids having to set up big dev environment to go back and support something from several years ago
    • Docker allows entire environment to be easily reproduced, using the Dockerfile
  • Cattle, not pets
    • Care and feeding that happens when you set up server – pets
    • With cattle, you don’t care that much if one of the servers drops out

 

Things

  • Containers are not virtualization, as we know it
  • Containers can be thought of as shipping containers
    • Everything embedded that you need
  • Containers are created from images
  • Images are bundled snapshots
  • Images are deployable
    • Can spin up in Azure, AWS, Google, or in-house
    • Can also move these between platforms
    • Containers don’t tie you to platform
  • Compatibility between OSes
    • Build on OS X, run on Windows
  • What app store did for mobile
    • Docker like multi-platform app store for developers
  • Finicky for OS X and Windows
  • Security by isolation from OS
    • Can’t get from container back to host operating system
    • Helps manage risk
  • Images – immutable file system with execution parameters
  • Containers – instantiations of images (running or inactive)

 

Demo – Simple node.js application

  • Sends a message to console

 

Running it

  • Can run on dev box directly, i.e. running Node

 

Running with Docker

  • Dockerfile – contains instructions for how to run within container
    • Pull down Node
    • Make a directory for app, one for user
    • Do npm install for dependent packages
    • Copy source code down
    • ENTRYPOINT – Says the command to run to start up container
  • .dockerignore – what not to bring down
    • e.g. node_modules
  • Build the container
    • docker build – runs through the Dockerfile and sets everything up
    • At the end, it gives you an image ID
  • The image exists on the dev machine at this point
  • Now run this image
    • Can map docker’s port to port on host machine
    • docker run
    • Can hit the site at the new port
  • Log on to container
    • docker exec – runs bash shell
  • docker ps
    • List running containers
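
The Dockerfile described above might look like the following minimal sketch for the node.js demo app (the base image tag, file names, and port are assumptions, not taken from the talk):

```dockerfile
# The demo pulled down Node; in production, pin a version you know works
FROM node:latest

# Make a directory for the app
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Do npm install for dependent packages (copied first for better layer caching)
COPY package.json ./
RUN npm install

# Copy source code down
COPY . .

# Port the app listens on (assumed)
EXPOSE 3000

# Says the command to run to start up the container
ENTRYPOINT ["node", "server.js"]
```

You would then build with something like `docker build -t myapp .` and run with `docker run -p 8080:3000 myapp`, mapping the container's port to a port on the host machine.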

 

Demo 2 – Calculating Pi

  • Running locally – spits out a bunch of debug info to console
  • docker run – can run in container
  • Winston logger
    • Log out to directory
    • Can log to external location
  • Typically, you wouldn’t persist data to the Docker container
    • Because the data is lost when you power down the container
    • You’d typically persist data elsewhere

 

Deploy container to production server

  • DockerHub – place for you to deploy your applications to
    • It’s not production yet
  • First, test on your local machine
  • Publish to DockerHub – docker login
    • Tag the image that you created/tested
    • docker push – push to docker hub
  • Push to production, e.g. Azure
    • Docker Container settings in Azure
  • Should work the same on Azure
    • Typically takes 10-15 mins to deploy out to Azure
  • Can do the same for AWS–deploy the same container to AWS
    • “A bit more painful”
    • Use AWS command line tools (you have to install them)
    • Quirky to get push to work
  • Could even envision running the same app on both AWS and Azure
    • More flexibility to react to down time

 

Things

  • docker pull – pull an image from Docker Hub
  • docker run – run an image
  • docker images – list images
  • docker ps – list all running containers
  • docker rm – delete a docker container
  • docker rmi – delete a docker image

 

Questions

  • More than just Azure and AWS?
    • Yes, e.g. Google, or could build infrastructure in-house
    • But simpler to deploy to Azure or AWS
  • (Sean): Would you typically run SQL Server inside a Docker container?
    • Yes, you can definitely run SQL Server in a Docker container
  • Difference between container and image?
    • An image is the immutable definition of what needs to run
    • A container is an instantiation of an image
    • The container, not the image, is the running instance
  • If I have various configurations for different environments, how does the Dockerfile manage this?
    • Yes, could use environment variables that get read in the dockerfile
  • Why do you do node:latest, won’t this lead to trouble in the future?
    • In production, you’d want to hard-code a version that you know works for you

That Conference 2017 – SASS and CSS for Developers

That Conference 2017, Kalahari Resort, Lake Delton, WI
SASS and CSS for Developers – Robert Boedigheimer

Day 1, 7 Aug 2017


Executive Summary

  • Use CSS for styling your web site
  • SASS makes writing CSS much easier
  • Lots of great constructs for modern UI in both CSS and SASS
  • Some good 3rd party libraries out there (e.g. Bourbon)

About Me

CSS

  • Cascading Style Sheets
  • Describes presentation of a document

Why CSS?

  • Site-wide changes by modifying the styles
  • Support multiple views
  • Leaner pages
  • http://csszengarden.com
    • Shows how different style sheets lead to radically different views

Style Rule

  • Selector, then a declaration block:

    selector {
        property: value;
    }

Using CSS

  • Inline
    • Style attribute directly on elements
  • Embedded
    • Styles at top of page in <head>
  • External style sheet
    • Style rules collected in .css file
    • Include via <link>
    • (Almost) always do it this way

Cascading

  • Inline first, then top of page, then external

Basic Selectors

  • Element, e.g. h1, div
  • Id
    • #idName
  • Class
    • One or more elements can have this class value
    • .classname
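
The three basic selector types above might look like this (names and property values are illustrative):

```css
/* Element selector: all h1 elements */
h1 { color: navy; }

/* Id selector: the single element with id="header" */
#header { background: #eee; }

/* Class selector: any element with class="warning" */
.warning { color: red; }
```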

Pseudo Elements and Classes

  • Styles not based on document structure
  • Begins with colon
  • E.g. Style around focused element
    • a:focus { …

Inheritance

  • Elements from document parsed into tree based on structure
    • E.g. Apply style to body, inherited by <div>, <h1> in body

Colors

  • Named colors
    • 17 in CSS 2.1, 140 more in CSS 3
  • RGB

Box Model

  • Margin on outside
  • Border
  • Padding on inside

Common Techniques

  • Cursor
  • display:none / visibility – turn elements on/off (display:none removes from layout; visibility:hidden keeps the space)
  • text-transform – universally apply transform to text, e.g. all caps
  • z-index – overlapping, higher is on top
  • Non-scrolling areas

Reset or Normalize Stylesheet

  • Goal is to reduce browser inconsistencies
  • Reset – set default styles, apply everything yourself
  • Normalize.css – makes default styles consistent across browsers

RB: Going to Bootstrap for all of our web properties now

CSS 3 Background

  • Series of “Modules”
    • Easier to manage than one big specification
    • Make for easier updates
    • Devices w/constraints can implement just modules that are appropriate to that device
    • Some considered part of “CSS Core”
    • www.w3.org/Style/CSS/

CSS 3 Browser Support

  • http://Caniuse.com
    • Can see which devices/browsers a particular element or feature works on

Browser/Vendor Prefixes

  • Browser vendors can create their own specific settings
  • For stuff not yet standardized
  • Once completed, these specific settings may be removed
  • e.g. -ms-grid

Rounded Corners

  • Border-radius
    • e.g. on img
    • Can be different values on different corners
  • Can also use oval corner
  • (More the craze a while back)?

Opacity

  • Specify how opaque an element is
  • Ranges from 0 (fully transparent) to 1 (opaque)
  • F12 in browser, can emulate older versions of IE

CSS 3 Selectors

  • Attribute selectors
    • E[A=V] – attribute A equals V
    • E[A$=V] – attribute A ends with V
    • Etc.
  • input:invalid
    • If input element is invalid, apply this style

Web Fonts

  • Introduced in IE 5, was proposed for CSS 2
  • Watch licensing!
  • Watch FOUT (Flash Of Unstyled Text) and FOIT (Flash of Invisible Text)
  • @font-face

Google Fonts

  • Open Source fonts (600+)
  • <link> to fonts
  • Style-able with CSS
  • Free
  • Developers.google.com/fonts/
  • Fonts.google.com

How

  • <link> to Google site with fonts
  • font-family: ‘Lobster Two’
  • Can host fonts on your own CDN
    • Download the fonts
    • Convert to WOFF or WOFF2
  • Be aware, Google doesn’t display anything while the font is loading

Animations

  • Property changes in CSS values using keyframes
    • @keyframes AnimName
    • Specify percentages
      • “from” – 0%
      • “to” – 100%
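
A keyframe animation using the from/to percentages above might look like this (the animation name, duration, and properties are illustrative):

```css
/* Define the animation: "from" is 0%, "to" is 100% */
@keyframes FadeIn {
  from { opacity: 0; }   /* 0% */
  to   { opacity: 1; }   /* 100% */
}

/* Apply the animation to an element */
.banner {
  animation: FadeIn 2s ease-in;
}
```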

2D Transforms

  • Animations using CSS transforms
  • Matrix translations
  • Transforms are render transforms, independent of layout

Media Queries

  • Media type and 1 or more expressions that check “media features”
  • For responsive design
  • Use
    • Width
    • Height
    • Color
    • Resolution
    • Orientation
  • E.g. Bootstrap
  • Question: Can I get <div> size?
    • No
    • But might be 3rd party package
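
A media query checking one of the "media features" above might look like this (the breakpoint value and rule are illustrative, in the style of Bootstrap-like responsive design):

```css
/* Hide the sidebar when the viewport is at most 768px wide */
@media screen and (max-width: 768px) {
  .sidebar { display: none; }
}
```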

Other CSS 3 Techniques

  • Box Shadows
  • Gradients
  • Text Shadows
  • Multiple Backgrounds Images
  • Word Wrap

RB: Note that we’re finally getting stuff like gradients and rounded corners at a time when de

Modernizr and Fallbacks

  • Modernizr – detects features in browser
    • Can do alternate method for some feature, e.g. use jQuery for opacity

General CSS Tips

  • Expirations
  • Rt-click, Inspect, Styles tab shows everything applied to an element
    • Can play around with different styles right in the browser

Sass

  • Syntactically Awesome Stylesheets, 2006
  • CSS preprocessor
  • Write Sass in your designer/editor; a compile step converts it to CSS
  • Two syntaxes
    • Indented syntax (Sass)
    • SCSS (Sassy CSS)  [this is the one you want]

Why Sass?

  • Provides programming concepts for CSS
  • Better organization
  • Reduced maintenance
  • Less vs. Sass
    • Very similar

Tools

  • Ruby – Sass gem
  • VS 2015
    • Web Compiler extension
    • Grunt
    • Gulp
  • Sassmeister.com
    • Experiment

Conventions

  • Myfile-gen.css (output file)

Major Features in Sass

  • Variables
  • _partials, @import
  • Mixins
  • Inheritance
  • Nested Rules and Properties
  • Functions

Variables

  • Avoid “Magic Values” (DRY)
  • $variableName: value;
    • e.g. $primaryColor: Red
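
A variable defined once and used in a rule might look like this (names and values are illustrative):

```scss
// Define once, use everywhere (avoids "magic values", DRY)
$primaryColor: red;

h1 {
  color: $primaryColor;
}
```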

_partials

  • Create files to organize SASS and styles in separate files
  • e.g. _variables.scss
  • Then @import "_variables.scss";
  • Then everything built into single big .css file
  • So HTML looks the same

Mixins

  • Set of declarations that can be reused
  • @mixin to define, @include to reuse
  • Parameters
    • Default values
    • Variable number
  • @content – pass declaration blocks
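
A mixin with a default parameter value, defined with @mixin and reused with @include, might look like this (names and values are illustrative):

```scss
// Define a reusable set of declarations, with a default parameter value
@mixin rounded($radius: 5px) {
  border-radius: $radius;
}

.card {
  @include rounded;        // uses the 5px default
}

.avatar {
  @include rounded(50%);   // overrides the parameter
}
```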

NB: Should still always look at your CSS to verify what’s getting generated

  • e.g. pay attention to size

Style Inheritance

  • Create classes that inherit from other classes
  • @extend .myBaseClass
  • Compiler will set up style for common stuff and apply to correct elements

Nested Rules and Properties

  • Just a syntactic difference – embedding one style inside another
  • Nested properties
    • e.g. border: { width: 2px; } generates border-width: 2px;

Control Directives and Expressions

  • @if, @else
    • E.g. check $debugBuild
    • And set $imagePath to correct server
    • NB: for absolute URIs, start with just “//”
  • @for loop
  • @each statement to iterate through a list of items
    • CSS explodes into individual styles

Miscellaneous

  • Debug statements
  • Comments – can disappear or stay in CSS
  • Always modify .scss files, not .css files
  • Check CSS output CSS periodically
  • Minify

Frameworks/Mixin Libraries

  • Bourbon
    • Whole bunch of pre-written scss stuff
    • Only outputs to CSS what you use
  • Compass
  • Bootstrap
    • Switching to Sass/Scss

T-SQL – Where Is a Table Used?

Another “put it in my blog so I don’t lose it” T-SQL nugget.

Nicer than just a raw text-based search, a where-used search for a named table.

select distinct [Found In] = sp.Name, sp.type_desc
  from sys.objects o inner join sys.sql_expression_dependencies  sd on o.object_id = sd.referenced_id
                inner join sys.objects sp on sd.referencing_id = sp.object_id
  where (o.name = 'Cow') 
  order by sp.Name

Handy T-SQL Merge

Inserts records into target that aren’t in source, deletes records that are no longer in source.

MERGE CowTarget AS t
USING CowSource AS s
ON (t.TargetCowId = s.CowId)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (TargetCowId, TargetName, TargetMotto)
    VALUES (CowId, Name, Motto)
WHEN NOT MATCHED BY SOURCE THEN DELETE;

My Git Cheat Sheet

I prefer to use git from the command line, rather than using a GUI-based tool.  Here are my favorite or most-used git commands.

Tools

  • Command line git – posh-git
  • Comes with Git for Windows – https://git-scm.com/download/win
  • After install, you have shortcut to “Git Shell”
  • NOTE: Tab for auto-completion in Git Shell

 

Setting up Kdiff3 as diff/merge tool

  • Download/install Kdiff3
  • Add to PATH: c:\Program Files\Kdiff3 (or equiv)
  • git config --global merge.tool kdiff3
  • git config --global diff.tool kdiff3

 

Cloning – get local copy of repo

  • git clone <repo-url>

 

List branches (local and remote)

  • git branch -a

 

Create new local branch

  • git branch 170502
  • git checkout -b 170502 master    // create new branch off specified branch (master)

 

Switch to new branch

  • git checkout 170502

 

Switch back to master (or another branch)

  • git checkout master

 

Push new branch (simple git push will suggest this)

  • git push --set-upstream origin 170502

 

Revert local changes (single file or all files)

  • git checkout HEAD <filename>
  • git checkout HEAD .

 

Doing diffs of local changes

  • git difftool <filename>              // diff local file to remote (same branch, if not staged)
  • git difftool master <branchname> -- <filename>   // diff file across branches, remote copies
  • git difftool master -- <filename>   // diff local file to different branch (remote)

 

Sequence for commit/push changes

  • git add .    // add files to list of things to be committed
  • git commit -m "some message" .    // commit to local repo
  • git push     // push to remote
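
A minimal end-to-end sketch of that sequence, using a local bare repository as a stand-in for the real remote (all paths, file names, and the commit message are made up for illustration):

```shell
set -e

# Create a bare repo to act as the "remote", then clone a working copy from it
tmp=$(mktemp -d)
git init --bare -q "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work"

cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "You"

# Make a change, then: add, commit, push
echo "hello" > notes.txt
git add .                          # stage files to be committed
git commit -q -m "some message"    # commit to local repo
git push -q origin HEAD            # push current branch to the remote
```

After the push, the branch and commit exist on the remote, which you can confirm with `git ls-remote`.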

To unstage a file (reverse of git add)

  • git reset <filename>

To merge changes back to master, from branch

  • On github.com site, find branch, do New Pull Request
    • This just shows all diffs, allows comments, but doesn’t yet create the pull request
  • Add title and comments
  • Review commits and changes
  • Create Pull Request
  • Verify no conflicts
  • (Can leave Pull Request sitting out there at this point)
  • Merge Pull Request
  • Can delete branch after merge (Trash icon in branch list)

 

Update local, after merge

  • git checkout master
    • Back to branch that you merged into
  • git pull
    • Update from remote (you get the just-merged changes)
  • git remote update origin --prune
    • Update local list of remote branches
  • git branch -d 170502
    • Delete local branch that you just merged

 

Switching to new branch that already exists on remote

  • git remote update origin --prune
    • Update local list of remotes
  • git checkout newbranch
    • Automatically set to track remote

 

Abandon a branch

  • git checkout master
    • Get off the branch
  • git branch -d mybranch
    • Delete local
  • Delete branch on github web site

 

Merging

  • Assume you’re working on a feature branch and want to periodically merge from master
  • git checkout master
  • git pull
  • git checkout mybranch
  • git merge master

Merge remote into local changes

  • E.g. if you’re on branch mybranch, have local changes not pushed, and there are other changes already pushed onto mybranch from elsewhere
  • git checkout mybranch
  • git fetch
    • Update remote tracking branches locally, e.g. origin/mybranch
  • git merge origin/mybranch
    • Merge from remote (tracking branch) into your branch
  • Resolve any merge conflicts

If you have conflict

  • git mergetool filewithconflict
  • Resolve in mergetool (e.g. KDiff3) and save
  • Note that merged file is now included in list of files to be committed and that .orig file exists as untracked file
  • Remove untracked orig files
    • Option 1
      • git clean -n
      • git clean -f
    • Option 2 – just delete files

 

See also: Git Cheat Sheet by Tower

 

 

 

WPF Commands – A Pattern that Works

Using commands in WPF can be frustrating. Having so many different ways to do commanding in WPF leads to some confusion. There are multiple ways to wire up commands that are equally correct, though some patterns are cleaner than others.

Below is a pattern that works.

Create static property to expose command

Making the command a property rather than a variable makes the syntax in XAML a bit cleaner.  Using RoutedUICommand allows setting the text for the command. This is useful if you want to use the command in multiple places.

Here’s code for creating the command, e.g. in a MainWindow class:

private static RoutedUICommand _pressMeCommand = new RoutedUICommand("Press Me", "PressMe", typeof(MainWindow));
public static RoutedUICommand PressMeCommand
{
    get { return _pressMeCommand; }
}

Add CanExecute and Executed handlers

Add code in code-behind (or ViewModel) for handlers. Below, we have private handlers in MainWindow. In this example, we have a boolean indicating whether the command can be executed. You could also put the logic directly in CanExecute, if it’s not used elsewhere.

private void PressMe_CanExecute(object sender, CanExecuteRoutedEventArgs e)
{
    e.CanExecute = CowboyCanTalk;
}

private void PressMe_Executed(object sender, ExecutedRoutedEventArgs e)
{
    MessageBox.Show("Howdy howdy I'm a cowboy");
}

Set up command binding

You can do this from code or XAML.  Doing it in XAML is perhaps a bit cleaner. Below is an example of binding the above command to its handlers from XAML (for MainWindow). Note that we can just use the command name instead of the {x:Static} syntax because we made the command a property.

<Window.CommandBindings>
    <CommandBinding Command="local:MainWindow.PressMeCommand"
                    CanExecute="PressMe_CanExecute"
                    Executed="PressMe_Executed"/>
</Window.CommandBindings>

 

Wire up a button

Below, we wire a button up to the command.  You could just set Content to text directly. This would be fine to do and is simpler than what we’re doing below. But putting the text into the RoutedUICommand is helpful if you use the command in more than one place (since you specify text in just one place—in the command). And menu items automatically pick up the command text. (NOTE: If you used this pattern for Content regularly, you could just put it in a style to hide the complexity).

<Button Command="local:MainWindow.PressMeCommand"
        Content="{Binding RelativeSource={RelativeSource Self}, Path=Command.Text}" />

Wire up a menu item

Below, we wire up a menu item in a context menu to the same command. Note that here we do have to use the x:Static syntax. But also note that we don’t need to specify text for the menu item—it comes from the RoutedUICommand.

<Window.ContextMenu>
    <ContextMenu>
        <MenuItem Command="{x:Static local:MainWindow.PressMeCommand}" />
    </ContextMenu>
</Window.ContextMenu>

Bottom line

  • We used RoutedUICommand so that we could attach text to the command and avoid specifying text for both button and menu item
  • Binding in XAML is a (tiny) bit less code than doing it in code-behind

Other thoughts

  • You could also use a DelegateCommand pattern, passing in lambdas for both CanExecute and Executed. Use of a DelegateCommand is common in MVVM architectures.

 

 

Object Disposal and BackgroundWorker Object in .NET

Here are a few notes about best practices related to:

  • IDisposable and an object’s Dispose() method
  • The using statement
  • Disposal of BackgroundWorker objects

(NOTE: BackgroundWorker object is no longer the preferred mechanism for doing work on a background thread in C#, given that the language supports task-based asynchrony with the async/await constructs. However, many legacy applications still make use of the BackgroundWorker class).

Q: What’s the goal of the using statement and IDisposable (Dispose) interface?

A: (short) To tell an object when it can clean up unmanaged resources that it might be hanging onto

A: (longer)

  • .NET code can make use of managed resources (e.g. instantiate another .NET object) or unmanaged resources (e.g. open a file to read from it)
  • Managed resources are released by the garbage collector (GC) automatically
    • Note that this is non-deterministic, i.e. you can’t predict when an object will be GC’d
  • To release an unmanaged resource, code typically follows this pattern:
    • Release resource in finalizer (~ syntax).  Finalizer is called during GC, so the unmanaged resource is released when the object is being GC’d
    • Optionally, can support IDisposable (Dispose method)
      • Client calls Dispose before object is GC’d to release unmanaged resources earlier than normal GC
      • Allows for deterministic destruction
      • using statement automates calling of Dispose on an object
    • Classes implementing Dispose will still get GC’d normally at a later time
      • If Dispose was called first, code typically tells GC not to call its finalizer, since it has already done the work the finalizer would do (GC.SuppressFinalize)
      • If client failed to call Dispose, finalizer runs normally, so unmanaged resources get cleaned up when the object is GC’d
    • Objects with finalizers take a little bit longer to be GC’d
    • Here’s how IDisposable is typically implemented – http://csharp.2000things.com/2011/10/11/430-a-dispose-pattern-example/

 

Q: When should I use the using statement?

A: Typically, you should use the using statement to invoke Dispose on any object that implements IDisposable

 

Q: What happens if I don’t call Dispose or use the using statement?

A: (short) Unmanaged resources are (typically) released a bit later than they otherwise would be

A: (longer)

  • If you don’t call Dispose on an object that implements IDisposable, it typically hangs onto unmanaged resources until it is GC’d and then releases them
  • Depending on the type of resource, the first object may block access to the resource until it’s released
  • Failing to use using (or call Dispose) typically doesn’t lead to a memory leak. Rather, it just means that resources are released a bit later

 

Q: Should I use a using statement for a BackgroundWorker object?

A: (short) Yes, since BackgroundWorker has Dispose method (although calling Dispose doesn’t actually do anything)

A: (longer)

  • It’s okay to use using on BackgroundWorker, since it does implement IDisposable
  • BackgroundWorker, however, doesn’t actually do anything when Dispose is called.  Its parent class, Component, detaches from its ISite container, but this is only relevant in Windows Forms.
  • Calling Dispose does suppress finalization, which means that the BackgroundWorker will be GC’d a little bit sooner.  This is reason enough to use using on the BackgroundWorker.
  • The using statement for a BackgroundWorker does nothing with the BackgroundWorker’s event handlers (i.e. it doesn’t detach any event handlers)

 

Q: Should I detach event handlers in the handler for RunWorkerCompleted?

A: (short) No, you (typically) don’t need to explicitly detach event handlers for a BackgroundWorker

A: (longer)

  • In .NET, if two objects reference each other, but no other “root” object references either of them, they do both get garbage collected
  • If we have a WPF form that has a class-level reference to a BackgroundWorker
    • Assume that we instantiate the BackgroundWorker when user does something on the form and attach handlers (methods in form) to that instance
    • Form now has ref to BackgroundWorker (class-level ref) and BW has ref to form (via the handlers)
    • When form closes, if main application no longer has a reference to the form, both the form and the BackgroundWorker will be properly garbage collected even though they reference each other
  • You do need to detach handlers if you have a BackgroundWorker that is meant to live longer than the object that owns the handlers
    • E.g. if we had an application-level BackgroundWorker and forms that attached handlers to its DoWork or RunWorkerCompleted events.  If the BW was meant to live after the form closes, you’d want to have the form detach its handlers when it closed.

 

 

String Properties in ADO.NET Entity Data Model

(NOTE: This is Model First development in Entity Framework–start with a data model in the Entity Framework Designer and then generate database schema from the model).

When you add a scalar property of type string to an entity data model, you can fill in the values of several other properties within Visual Studio.

  • Fixed Length – (None), True, False
  • Max Length – (None), value, Max
  • Unicode – (None), True, False

stringprop

What does a value of (None) mean for these properties (when generating SQL Server Database from model)?

  • Fixed Length – (None) equivalent to False
  • Max Length – (None) equivalent to Max
  • Unicode – (None) equivalent to True

Here’s how choices for these properties map to SQL Server data types:

  • Fixed=True, Max=20 – nchar(20)
  • Fixed=(None), Max=20 – nvarchar(20)
  • Fixed=False, Max=20 – nvarchar(20)
  • Fixed=(None), Max=(None) – nvarchar(max)
  • Fixed=True, Max=(None) – nchar(4000)
  • Fixed=False, Max=(None) – nvarchar(max)
  • Fixed=(None), Max=Max – nvarchar(max)
  • Fixed=True, Max=Max – nchar(4000)
  • Fixed=False, Max=Max – nvarchar(max)
  • Fixed=(None), Max=20, Unicode=True – nvarchar(20)
  • Fixed=(None), Max=20, Unicode=False – varchar(20)

This means that if you just pick String as the type and set nothing else, you’ll get nvarchar(max) as the SQL Server type.

NOTE: The value of 4,000 is being used because the maximum length of a fixed length string in SQL Server is 4,000 characters (8,000 bytes when encoded as UTF-16).

 

Count Total Rows while Paging and Avoid Two Queries

This is well covered in many other places, but I’m posting on my own blog so that I can more easily find this trick.

You can do paging in SQL Server 2012 and later using the OFFSET-FETCH syntax. This is useful for returning a single page’s worth of records from a larger (sorted) set of records. To count the total number of records, however, you’d normally do a second query. The T-SQL below shows how to do a total count as part of the query that returns the page in question. The count is repeated for every record, but it avoids having to do two queries.

select FirstName, LastName, COUNT(*) over () as TotalCount
  from Person
  where LastName like 'mc%'
  order by LastName, FirstName
  offset 500 rows
  fetch next 100 rows only

Here’s what the output looks like, returning records 501-600 out of 1,968 records having a last name that starts with “Mc”.

161109-1

SQL Server – Dump Info on Keys

Here’s a short stored procedure that could be useful when trying to understand a SQL Server database.

Let’s say that you have a simple schema like the one below. Person has GroupID as a foreign key (Group.GroupID), meaning that a person belongs to a single group. A person can also have multiple contact records, so PersonContact has a foreign key indicating which person the contact is for.

Keys

In a simple schema like this, you can quickly see the two relationships. You can also see some of this by looking at the Person table in Management Studio, under Columns and Keys. In the diagram below, we see that GroupID is a foreign key and, based on the naming convention, we infer that the corresponding primary key is in the Group table.

MgmtStudio

This notation relies on the naming convention used for the FK_Person_Group relationship. Also, while looking at the Person table, we’re unable to see which tables might contain foreign keys that refer to the primary key in the Person table.

Below is a simple stored procedure that dumps out all relationships that a specified table participates in. That is, given a table, it tells you:

  • All foreign keys that reference the specified table’s primary key
  • All foreign keys in the specified table and where the corresponding primary key is located
-- Given specified table, show all PK/FK relationships.  Show:
--   1) All foreign keys that reference this table's primary key
--   2) All foreign keys in this table and which table contains primary key
create procedure uKeys
(
	@TableName varchar(255)
)
as
begin
select tfk.name as PKTable, cfk.name as PKColumn, tpk.name as FKTable, cpk.name as FKColumn from sys.foreign_key_columns fk
  inner join sys.tables as tpk on fk.parent_object_id=tpk.object_id
  inner join sys.columns as cpk on fk.parent_object_id=cpk.object_id and fk.parent_column_id=cpk.column_id
  inner join sys.tables as tfk on fk.referenced_object_id=tfk.object_id
  inner join sys.columns as cfk on fk.referenced_object_id=cfk.object_id and fk.referenced_column_id=cfk.column_id
  where (tpk.name = @TableName) or (tfk.name = @TableName)
  order by PKTable, PKColumn, FKTable, FKColumn
end;

With this stored proc in place, you can now do the following in a query window, to ask about the keys for the Person table.

ukeys 'Person'

You’ll get some nice output that shows information about all of the relationships for the Person table.

Output

This is a simple example, but this procedure is considerably more useful when you’re dealing with very large databases, where you have a large number of tables and it’s not easy to see all of the relationships for a particular table.

Addendum: There’s already a built-in stored procedure in SQL Server that does basically the same thing (though perhaps in not as clean a format). sp_helpconstraint will dump out the same sort of information, as shown below.

sphelpconstraint

That Conference 2016 – From Inception to Production: A Continuous Delivery Story

That Conference 2016, Kalahari Resort, Lake Delton, WI
From Inception to Production: A Continuous Delivery Story
Ian Randall

Day 3, 10 Aug 2016

Disclaimer: This post contains my own thoughts and notes based on attending That Conference 2016 presentations. Some content maps directly to what was originally presented. Other content is paraphrased or represents my own thoughts and opinions and should not be construed as reflecting the opinion of the speakers.

Executive Summary

  • Ian shares best practices of his development group at Pushpay, from inception to deployment

Tell a continuous delivery story in context

  • Pushpay company, SaaS business
  • ACMR growth – annualized committed monthly revenue stream
  • Grew to $10 million in 3 quarters (5 quarters is considered fast)
  • Everyone at Pushpay paid to deliver value to business
    • Code not yet running on production server has no value

Hierarchy

  • Tools (top)
  • People, practices
  • Just culture & blameless postmortems
    • Bottom layer, most important

Our journey begins

  • Idea

Why?

  • Shared vision over the value to the business
  • Talk about why to build it
    • Better than talking about what

Who?

  • Who has conversation?
    • Product
    • QA – must involve QA in initial discussion
      • Can’t just test quality at end
      • “Bake quality in”
      • Build with tester (pairing)
    • Dev

Building a feature

  • Dev – how will I build this thing?
  • QA – how will I break this thing?
    • Very valuable insight
    • “Nobody will ever do that”
    • QA tester writes out what tests should be
    • Then Dev writes unit tests based on this
    • Quality Assistant (not Assurance)

Building a larger feature

  • Long-lived feature branch (git)
    • Delta gets too big
      • Smaller deltas are lower risk
    • No feedback – from user
    • (good) DRY code
  • Feature switches (toggles)
    • Small deltas
    • Regular feedback
    • (bad) technical debt
      • Some code duplication
      • But you have to go back later and yank out old code

Pushpay terminology

  • Delta – diff between production server & head of master branch
    • Code not yet running in production
  • Want to keep delta small and contained
  • Shipping in tinier increments, easier to figure out what the cause of the problem is

Feature Switches

  • Configuration per environment
    • Features.config
    • Features.Production.config
    • Features.Sandbox.config
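The per-environment .config idea can be sketched in a few lines. This is a minimal illustration of the concept, not Pushpay's implementation; the switch names and defaults are hypothetical.

```typescript
// Minimal feature-switch lookup: an environment's setting wins over the default.
type Features = Record<string, boolean>;

const defaults: Features = { newCheckout: false, fastSearch: true }; // Features.config
const production: Features = { newCheckout: false };                 // Features.Production.config
const sandbox: Features = { newCheckout: true };                     // Features.Sandbox.config

function isEnabled(name: string, env: Features): boolean {
  // Fall back to the default when the environment doesn't override the switch.
  return env[name] ?? defaults[name] ?? false;
}

console.log(isEnabled("newCheckout", sandbox));    // true (lit up in sandbox)
console.log(isEnabled("newCheckout", production)); // false (dark in production)
```

Keeping the lookup this dumb is the point: shipping a dark feature is just a config diff, not a branch merge.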

Feature Switches

  • Url manipulation to toggle switches on/off
  • Deliver daily increments of (non-running) code
  • Light up a slice of feature
  • Measure – statsd
  • Re-think road-map to complete feature

WoMM

  • Works on My Machine
  • Context switching after QA person finds problem
  • Pushpay turned this around
    • Must work on your machine
    • Tester decides this
    • Hand laptop to tester, let them test on the dev machine
    • Best value for company from dev point of view–watch tester break it right in front of them
    • Shortens the DEV/QA cycle
    • Pair testing

Code review

  • Every line of code gets reviewed
  • Code must be reviewed and WoMMed before merging
  • “Roll forwards to victory”

Code Review

  • Do
    • Validate approach
    • Performance, security, operability
    • Cohesion, coupling
    • Be honest
  • Don’t
    • Be rude – e.g. “dude that’s gross”
      • Better – tell the coder how you would have done it
    • Seriously, don’t be rude
    • Sweat the small stuff, like bracing, spaces
      • This stuff is not important

Cross-Pollination

  • Someone else does it all again!
  • Pollination – not necessarily from your cell
  • Might not fully understand the context of your feature

4 Continuouses

(1) Continuous Integration

  • Source control
    • PR branch
    • Pushed early, for discussion
  • Build & Test
    • TeamCity does builds, using nUnit for unit testing
    • Integration testing – anything that crosses a boundary

CI – Source Control

  • PR-based workflow
  • Review happens in the PR
  • Everything reviewed

CI – build and test

  • PR Branch: Build, unit test and integration test
  • Merge into master – build, unit test, integration and acceptance test
    • Goes to QA – then rolled back or pushed to production
    • Human decision to push to production
    • Acceptance tests in Selenium – a bit brittle, but have some value
    • Create static Model and push to View, then test
      • If it breaks there, it’s not because of back-end
      • Verified that your site is intact, visually
      • Renders view against snapshot – if diff, either a bug or you accept as new snapshot
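The snapshot step above can be shown with a toy renderer. This is a sketch of the idea only; the view, model, and snapshot here are made up, not Pushpay's tooling.

```typescript
// Toy snapshot test: render the view from a static model, diff against the stored snapshot.
function renderView(model: { name: string }): string {
  return `<h1>Hello, ${model.name}</h1>`;
}

const snapshot = "<h1>Hello, Ada</h1>";      // previously accepted rendering
const current = renderView({ name: "Ada" }); // render against a static model

// A difference is either a front-end bug or a change you accept as the new snapshot.
console.log(current === snapshot); // true
```

Because the model is static, a failing diff can't be blamed on the back end; only the view changed.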

(2) Continuous Deployment

  • Automatically build, package and deploy to QA
    • Octopus deploy (great tool)
  • Manually promote package to production
    • One button click (because of Octopus)

(3) Continuous Delivery

  • Operability
  • Value

CD – Operability

  • Exception logging
  • App logging (Log4Net)
  • App metrics (statsd)
    • Measure absolutely everything
  • Incident

CD – Value

  • Add incremental bits of value to the product
    • Need to think – is there maybe value in shipping a portion of the feature?
  • Measuring the effectiveness

(4) Continuous Improvement

  • Actively seeking out opportunities to improve
    • Fix broken windows
    • Leave codebase in better state than when you found it
    • Improve the process

Bots

  • Shipbot, Beebot, Salesbot
    • Many, many, many more
  • @C3PR in action
    • Catalog of little commands
    • People joining PRs together

Just Culture

  • Sidney Dekker
  • Retributive culture
    • Clarity between acceptable and unacceptable
  • Restorative culture / model
    • Focuses on learning from what went wrong
    • Safe to fail

Fear of breaking things will paralyze an organization

Toyota’s Five Whys

  • Keep asking why until you get to root cause of problem
  • Doesn’t work for Pushpay
    • No single thing that is root cause
    • And often turns into “who”

Blameless Post-Mortems

  • Talk about how to stop the thing from happening again
  • When?
    • When there is an opportunity
    • Often, after break to production
    • Or even when something brings QA server down
  • How?
    • If we had a meeting, the loudest person in the room would do the most talking
    • So we do this asynchronously in a Wiki
    • Coordinated in Slack channel #morgue
    • Co-ordinated with person closest to the incident
  • What?
    • Four sections in report
      • Scenario and impact
      • Timeline – write in real-time
      • Discussion
      • Mitigation – make sure this type of thing won’t happen again
        • Actionable ticket in Jira, highest priority possible
        • Slipping feature is better than having the incident happen again

Fault

  • Easy for manager to say to staff–when stuff happens, it’s not your fault
  • Much harder to go to CEO and say that stuff just happens
    • Board reads every post-mortem

That Conference 2016 – Clean Architecture: Patterns, Practices, and Principles

That Conference 2016, Kalahari Resort, Lake Delton, WI
Clean Architecture: Patterns, Practices, and Principles
Matthew Renze – @matthewrenze

Day 3, 10 Aug 2016

Executive Summary

  • A whirlwind overview of domain-centric architecture and discussion of its merits
  • Overview of CQRS architectures (Command and Query Responsibility Segregation)
  • Brief mention of microservices architectures

Story – Tale of two architectures

  • John/Jane, same expertise
  • Hire them to design two buildings, same purpose
  • John
    • Superstar, creates the architecture, hands off to builder
  • Jane
    • Starts by talking to inhabitants and crews
    • Then involved during construction, helping implement
    • Tweaks plan as necessary
  • Captures essence of difference between clean architecture and traditional

About Me

  • Consultant
  • Pluralsight
  • Msft MVP

Focus

  • Enterprise architecture
  • Line-of-business applications
  • Modern equivalent of 3-layer
  • Will talk about 6 key ideas

What is software architecture?

  • High-level – higher level than the code
  • Structure
  • Layers – vertical partitions of system
  • Components – sub-divisions of layers, or horizontal
  • Relationships – how wired

Levels of architectural abstraction

  • System
  • Sub-systems
  • Layers
  • Components
  • Classes
  • Data and methods

Messy vs. Clean Architecture

  • Bad – like spaghetti
  • Good – lasagna
    • Nice layers

What is Bad Architecture?

  • Complex – accidental
  • Inconsistent
  • Incoherent – things don’t fit
  • Rigid – not adaptable
  • Brittle
  • Untestable
  • Unmaintainable

Clean Architecture

  • Simple
  • Understandable – easy to reason about
  • Flexible
  • Emergent
  • Testable
  • Maintainable – easier to maintain over lifetime

Clean architecture

  • Architecture that is designed for inhabitants of the architecture
    • Not for architect
    • Or for machine
  • Start designing architecture for inhabitants
    • This is the primary concern
    • Everything else is secondary

Why is clean architecture important?

  • Cost/benefit
  • Minimize cost to maintain
    • Good to optimize for maintainability
    • Since most cost is spent during maintenance
  • Maximize business value

Decisions

  • Context is king
    • Answer always depends on context
  • All decisions are a tradeoff
  • Use your best judgment

Domain-Centric Architecture

  • Heliocentric was a more elegant model of the solar system

Classic 3-layer architecture

  • Database at center, then data access, then business logic

Domain-centric

  • Domain at center
  • Application around that
  • Then presentation and infrastructure around that
  • Database hung off the side

Uncle Bob

  • Most important thing is that the architecture is usable

Essential vs. Detail (e.g. for house)

  • Space is essential
  • Usability is essential
  • Building material is a detail
  • Ornamentation is a detail

Essential vs. Detail (Clean architecture)

  • Domain is essential
    • Domain: series of object models at center
    • Mirrors the mental models of the domain
  • Use cases are essential
    • And users
  • Presentation is a detail
  • Persistence is a detail
    • Can store in relational DB, NoSql, doc DB, etc.

Back to domain-centric diagram

  • What is essential is at the center of the diagram
  • Domain essential
    • At center
    • Everything points towards the domain

Hexagonal architecture

  • Application layer (domain) at center
  • Adapting to presentation
  • Adapting to do input
  • Adapting to persistence

Onion Architecture

  • Domain Model at center
  • Services
  • UI at outer

The Clean Architecture (Uncle Bob’s)

  • Entities at center
  • Use Cases
  • Controllers
  • Outside: Devices, Web, UI, External Interfaces

It’s all the same thing

  • All three of these are fundamentally the same
  • Domain at the center

Why use domain-centric architecture?

  • Pros
    • Focus on essential
    • Less coupling to details
    • Necessary for DDD
  • Cons
    • Change is difficult
    • Requires extra thought
    • Initial higher cost

Application Layer

  • Might embed use cases as high-level logic

What are layers?

  • Levels of abstraction
  • Single-Responsibility
  • Developer roles/skills
  • Multiple implementations

Classic 3-layer architecture

  • Database at bottom
  • Data Access
  • Business Logic
  • UI
  • Users
  • Each layer depends on the layer below it

Modern 4-layer architecture

  • Presentation at top
  • Application below that
  • Domain layer
    • Only domain logic
  • Persistence below application
    • Access to DB
  • Infrastructure below application
    • Access to OS

Application layer

  • Implements use cases
  • High-level application logic
  • Knows about domain layer but not persistence or infrastructure
  • No knowledge of upper layers

Layer Dependencies

  • Dependency inversion
  • Inversion of control
    • Details depend on abstractions
  • Independent deployability
  • Flexibility and maintainability
  • Flow of control
    • Presentation calls Application
    • Persistence calls Application

Example

  • Presentation layer w/controller
  • Application
    • Command dep on IDatabaseContext, IInventoryClient
    • Application–what the users are doing with the system
      • E.g. ICreateSalesCommand
  • Dep on Sale in Domain
    • Domain–objects in system
      • e.g. Sale object
  • Interface in cross-cutting concerns
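That shape can be sketched in TypeScript, reusing the names from the talk (ICreateSalesCommand, IDatabaseContext, Sale). The bodies are illustrative assumptions; the session's actual demo was C#/ASP.NET.

```typescript
// Domain layer: just data and (eventually) domain logic.
interface Sale { id: number; amount: number }

// Abstractions owned by the application layer; lower layers implement them.
interface IDatabaseContext { save(sale: Sale): void }
interface ICreateSalesCommand { execute(amount: number): void }

// Application layer: high-level use-case logic, no persistence detail.
class CreateSalesCommand implements ICreateSalesCommand {
  constructor(private db: IDatabaseContext) {}   // dependency injected via abstraction
  execute(amount: number): void {
    this.db.save({ id: 1, amount });
  }
}

// Persistence layer points inward: it implements the application's interface.
class InMemoryContext implements IDatabaseContext {
  sales: Sale[] = [];
  save(sale: Sale): void { this.sales.push(sale); }
}

const db = new InMemoryContext();
new CreateSalesCommand(db).execute(42);
console.log(db.sales.length); // 1
```

Swapping InMemoryContext for a real database context touches no application code, which is the DIP payoff the talk describes.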

Why use an application layer?

  • Pros
    • Focus is on use cases
    • Easy to understand
    • Follows DIP – Dependency Inversion Principle
  • Cons
    • Additional cost
    • Requires extra thought
    • What is application logic vs. domain logic?
    • IoC is counter-intuitive

Commands and Queries

  • Keep separated
  • Command
    • Do something
    • Modifies state
    • Should not return value
  • Queries
    • Answer question
    • Do not modify state
    • Always returns value
  • Why?
    • Avoid side-effects
  • Sometimes odd, exceptions
    • E.g. create record, return it
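The command/query split is easy to show in code. A minimal sketch (class and method names invented for illustration):

```typescript
// Commands mutate state and return nothing; queries return values and never mutate.
class CustomerStore {
  private names = new Map<number, string>();

  rename(id: number, name: string): void {   // command: modifies state, no return value
    this.names.set(id, name);
  }

  nameOf(id: number): string | undefined {   // query: answers a question, no side effects
    return this.names.get(id);
  }
}

const store = new CustomerStore();
store.rename(1, "Ada");
console.log(store.nameOf(1)); // "Ada"
```

The pragmatic exception mentioned above (create a record, return it) would relax the void return on the command.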

CQRS Architectures

  • Application layer, separate into two parts
  • Left
    • Queries
    • Data Access
  • Right
    • Commands
    • Domain
    • Persistence
  • Database at bottom
  • Data flow
    • Down through commands
    • Up through queries
  • CQRS is domain-centric architecture done in a sensible way

CQRS Type 1 – Single Database

  • Single database, typically NoSQL
  • Persistence layer might be something like EF
  • Queries – stored proc, linq to sql

CQRS Type 2 – Read/Write Databases

  • Command stack leads to Write Database
    • 3NF
  • Read database
    • 1NF, optimized for reads
  • Use “eventually consistent” model to push data across from write to read
  • Orders of magnitude performance improvement over single database
    • Because we mostly do reads, not writes

CQRS Type 3 – Event Sourcing

  • Command stack leads to Event Store
  • Read database under query stack
  • Replay events to get current state of entity (from deltas)
  • State modifications pushed over to Read Database
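The replay step can be sketched with a hypothetical account entity; the event names and amounts are made up, but the mechanism (current state as a fold over deltas) is the one described above.

```typescript
// An entity's history is a stream of deltas, not a current-state row.
type Event =
  | { kind: "Opened"; balance: number }
  | { kind: "Deposited"; amount: number }
  | { kind: "Withdrew"; amount: number };

const eventStore: Event[] = [
  { kind: "Opened", balance: 100 },
  { kind: "Deposited", amount: 50 },
  { kind: "Withdrew", amount: 30 },
];

// Current state is a left fold over the event stream.
function replay(events: Event[]): number {
  return events.reduce((balance, e) => {
    switch (e.kind) {
      case "Opened":    return e.balance;
      case "Deposited": return balance + e.amount;
      case "Withdrew":  return balance - e.amount;
    }
  }, 0);
}

console.log(replay(eventStore));             // 120
console.log(replay(eventStore.slice(0, 2))); // 150 (point-in-time reconstruction)
```

Replaying a prefix of the stream is exactly the point-in-time reconstruction and rebuild-the-database capability listed below.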

Type 3 points

  • Complete audit trail
  • Point-in-time reconstruction
  • Replay events
  • Rebuild production database
    • Just by replaying events
  • Only worth doing if you need these features

Why use CQRS?

  • Pros
    • More efficient design
    • Simpler within each stack
      • At expense of inconsistency across stacks
    • Optimized performance
      • Of each side
  • Cons
    • Inconsistent across stacks
      • More complexity
    • Type 2 is more complex
      • Adds consistency model
    • Type 3 might be overkill
      • If you don’t get business value from these features

Functional organization

  • Screaming architecture
    • Architecture should scream the intent of the system
  • Organize architecture around use cases
  • Uncle Bob

Building metaphor

  • House, typical
    • Intent – residential
    • Rooms embody uses–bedroom, kitchen, living room
    • Architecture shows use
  • Look at list of components, rather than architectural diagram
    • Can’t infer intent

Software intent

  • Could organize folders by components (models, views, controllers)
  • Or organize by things
    • Customers, products, vendors

This vs. that

  • Traditional
    • Content, Controllers, Models, Scripts
  • Functional
    • Customer
    • Product

So what?

  • Functional cohesion is more efficient
    • Because it better models how we think about the software

Why use functional organization

  • Pros
    • Spatial locality
    • Easy to navigate
    • Avoid vendor lock-in
  • Cons
    • Lose framework conventions
    • Lose automatic scaffolding
    • Categorical is easier at first

Microservices

Components

  • Horizontal – UI, business, data access
  • Vertical – Sales, Support, Inventory
  • UI presents all vertical pieces as single unified UI

Problem Domain

  • Sales and Support

Single domain model

  • Traditionally, create single domain model
  • E.g. Product on both sides, so we have single Product entity
  • Employee – sales or service
  • Problem
    • Becomes exponentially more difficult

Bounded contexts

  • Sales and support as boundaries
  • Some entities in just Sales
    • Some in Support
    • Some in overlap
  • Then pull apart and have separate models for each context
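The pulled-apart models might look like this (fields invented for illustration): each context keeps only the attributes it cares about, sharing an identity but not a model.

```typescript
// One "Product" model per bounded context.
interface SalesProduct { sku: string; price: number }            // Sales context
interface SupportProduct { sku: string; warrantyMonths: number } // Support context

// The contexts agree on identity (sku) but not on shape.
const forSale: SalesProduct = { sku: "A-100", price: 19.99 };
const forSupport: SupportProduct = { sku: "A-100", warrantyMonths: 12 };
console.log(forSale.sku === forSupport.sku); // true
```

Neither model has to grow fields for the other context's concerns, which is what keeps the single-model complexity explosion from happening.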

Microservice architectures

  • Subdivide system
    • Communicate with each other via lightweight mechanisms
  • Bounded contexts
    • Can have one agile team per bounded context
    • Only have to know about one bounded context
  • Small teams
  • Qualities
    • Independent
    • Similar to SOA
    • Independently deploy and scale each microservice
  • Size of microservices
    • Ongoing debate
    • Bounded context maps to microservice
    • So persistence model matches context matches service
    • “Goldilocks” point

Why use microservices?

  • Pros
    • Less cost for large domains
    • Smaller teams
    • Independence
      • Don’t need to know what other teams are doing
  • Cons
    • Only for large domains
      • Overkill for small domains
      • Lot of overhead cost for microservice
    • Higher up-front cost
      • Fault tolerance, latency, load balancing
      • Might start with single system, split when you have multiple bounded contexts
      • Distributed system costs

Code Demo

  • Solution architecture for simple web site
  • ASP.NET MVC 5
  • Top-level folders
    • Application, Common, Domain, Infrastructure, Persistence, Presentation
  • Presentation, Sales folder
    • sub-folders Models, Services, Views
  • Application layer
    • Sales | Commands, Queries
    • Under Commands, folder for each command
  • Domain layer
    • Fairly thin object model
    • But a small amount of business logic in the objects

Where to go next

  • Patterns of Enterprise Application Architecture – Fowler
  • Bliki has updated stuff
  • Clean Code – Uncle Bob
    • cleancoders.com
    • Great
  • Domain-Driven Design – Evans
    • For complex domains
  • Greg Young, Udi Dahan
    • CQRS event sourcing
  • matthewrenze.com
    • Pluralsight courses as well

That Conference 2016 – Lean UX

That Conference 2016, Kalahari Resort, Lake Delton, WI
Not Just Arts & Crafts: A Developer’s Guide to Incorporating Lean UX Practices into the Development Process
Rachel Krause

Day 3, 10 Aug 2016

Executive Summary

  • An argument for a lean/agile approach to UX design

Contact Info

@rachelbabiak // #thatconference
Centare
– On site, on SCRUM teams
– Consulting, be UX person on client team
– UX coaching

UX / UI

  • Not just putting a pretty picture on something
  • UX is the experience that the user goes through
  • What will help the user in the best way possible

Example: Domino’s Pizza Tracker

  • Visually, see where pizza order is in the process
  • Distracts the user a bit while they’re waiting
  • Really helped Domino’s competitively

UX Schools of Thought

  • Genius Designer
  • Mindset of experimentation

Genius Designer

  • Research, work done up front
  • Outside of development team
  • Developers have no say
  • “Best design possible”
  • If something goes wrong, blame the user

Mindset of experimentation

  • No matter how brilliant you are, you can’t figure it all out up front
  • Release early/often

Lean UX

  • Book: Jeff Gothelf
  • Focus on experience being designed
  • Move us forward

Focusing on 3 principles from book today

  • Accept failure
    • It’s okay to be wrong
  • Reduce waste
    • Get rid of high fidelity mockups
  • Collaborate
    • Everyone gets a say in the design
    • Gets everyone on the same page

Trying to fit Lean UX into Agile

Explain via cake metaphor

  • Big Design
    • Pretty picture of cake–you’ll get this at end of project
    • During development, you get cake that looks similar
    • Pivot when necessary, make changes
  • Lean UX
    • Start with wish list, rather than picture
    • Everything about the list can and will change
    • Scary to think of UX as just a list of features
    • Start with something small, e.g. cupcake
      • Everything there, but on smaller scale
      • Will release to users
    • Users give feedback
    • Then come out with something bigger
      • More features
    • Eventually get to something bigger, but matching what users need
    • Or cupcake might be all that the users need

Example: Pokemon Go

  • Added feature after field testing
    • Take screenshot

Best UIs are designed by the entire team

  • Different roles bring different perspectives

Example: A website

  • QA might add opinion about usability
  • Dev weighs in
  • Product Manager weighs in
  • Design changes as people offer thoughts

No complicated software necessary

  • Nothing that you have to pay money for
  • Need to use something that will work for entire team
    • E.g. Just a whiteboard
  • No time for high-fidelity mockups
  • Don’t get too attached to mockups

Personas

  • Traditionally–heavily researched, validating
    • Packaged up and given to stakeholders
    • This takes a lot of time

Proto-Personas

  • Easier, lean
  • Originate from brainstorming sessions
  • Based on stakeholders’ domain expertise
  • Starting point for understanding users
  • Think about role
    • E.g. Administrator, Warehouse Worker, Read-only User

Demo: Proto-Personas

  • Four quadrants
  • Upper left–name and face
    • Gives us empathy for the user
  • Upper right–basic biographical information
    • Who, where coming from, what devices
    • Can be specific, e.g. married with 2 kids
  • Lower left–pain points & needs
    • Frustrations with current solutions / products
  • Lower right–potential solutions
    • Can be subjective, e.g. wants things super easy
  • Do this on whiteboard, can take photo
  • Stakeholders invited in on this process
  • Talk through things and zero in on who you’re building software for
  • If you can validate with real people, you’ll learn whether your assumptions were correct

Kids Experiment

  • Had kids run through this process, with superheroes

User Journeys

  • Once you have persona, give him a journey
  • Series of steps that user is going to go through when working with software
  • [Persona] wants to [action] because [need] but [friction]
    • James wants to check in an inbound shipment because a railcar just arrived, but he doesn’t have accurate weights yet.

Now map the process

  • Steps in user journey
  • What happens, what the user does
    • Can do the journey for pre-solution or post-solution
  • Needs, activities and expectations
    • At each step
  • Persona’s emotional state
    • Graph between + (delightful) and – ()
    • Give rating for each step in the journey
  • Opportunities for Improvement
    • At each step
    • Don’t worry about scope at this point
  • Can do in Google Slides
    • Something simple

Design Sessions

  • In place of Photoshop stuff
  • Get an idea of the layout
  • On whiteboard
    • User needs, check things off as you address them
    • Have small journey map to see where you’re at
  • Do this collaboratively
  • Why do this
    • We might think we’re all on the same page
    • Once we draw it out, we realize we’re thinking differently
    • Then hash through things and come to the same vision

When?

  • Don’t have to do all of this for every single thing
  • Proto-personas
    • At start of project
    • Or later, if you don’t already have them
  • User Journeys
    • Whenever you don’t have enough information to fill out acceptance criteria
  • Design Sessions
    • When you need to be on the same page
  • Don’t do extra meetings for this
    • Can piggyback on existing Scrum meetings

Example: salesforce ux

  • Wanted to redesign their core web product
  • Had 85 designers and 60 Scrum teams
  • Used Lean UX to revisit assumptions
  • Then collaborated, included everyone on the team
  • Lightweight, just using Sharpie
  • How, with so many people?
    • Regular UX reviews on the calendar
    • Developers in design process early and often

Got a story?

  • Let Rachel know successes and failures

Info

  • @rachelbabiak // #ThatConference
  • linkedin.com/in/rachelkrau
  • speakerdeck.com/rachelkrau

Questions

  • Remote teams?
    • Some tools out there for virtual design session
    • E.g. Peer.in, Google Slides
  • Comment – When documenting, ask why you’re creating the document
  • Comment – Balsamiq
    • Cartoony layout keeps people from focusing on the details
    • Rachel: Challenge that, since it’s a paid application
      • Why not just use whiteboard
  • What to do during the design process
    • Translate some of the needs and pain points into design elements
    • Not a formal process, but doing gut check w/user’s pain points
  • Examples of turning negatives in journeys into positives
    • Often quick wins, e.g. confirmation dialog to avoid unintentional deletes
    • For bigger ones, you talk about how much value is in an improvement
  • How do you validate a design after it’s out there?
    • Identify who your users are, then do beta test
    • Do beta testings with stakeholders and users
    • Watch them walk through the journey, validate
    • If you have no users, you’ll have to make assumptions
    • Might go down hallway and use someone else, e.g. product manager

That Conference 2016 – Daring to Develop with Docker

That Conference 2016, Kalahari Resort, Lake Delton, WI
Daring to Develop with Docker – Philip Nelson, Omni Resources

Day 2, 9 Aug 2016

Executive Summary

  • Progression from bare metal to containers
  • Demoing Docker

Contact Info

github.com/panmanphil
twitter.com/panmanphil
panmanphil.wordpress.com

Docker and Containerized Apps

History: Bare Metal, VM, Docker

Racks of boxes

  • Manually installed hardware, software
  • Must configure everything
  • Deployment lightly automated

Virtual machines

  • Software installed servers (guests)
  • Still manually configure
  • Lightly automated deployments
  • This is an improvement
    • Easier to provision hardware, servers, networks
    • Mainly helps IT admins

Devops

  • Intersection of Development, QA, Operations

Devops 1.0

  • Improvement: automated setup for your dev environments
  • E.g. Vagrant, Chef, Puppet, Ansible
    • These tools set up OS
  • Provisioning: setting up everything that you need for running your application
  • Drawbacks, trouble spots
    • Don’t always handle dependencies
    • Not quite complete
    • Took maybe 15-20 mins to set up box
    • Eventually, the host OS gets messy
  • Provisioners only useful at start for clean boxes
    • They don’t delete stuff

Containers

  • Slice VM up into smaller units
  • Automated OS setup, automated application setup, automated deployment
  • Each unit, from the application’s perspective, appears to own the OS
    • In reality, units share same kernel

Containers vs. VMs

  • VM
    • Each VM has guest OS, with shared libs and app
  • Container
    • Docker Engine on Host OS
    • Libraries on top, shared libraries across all applications
    • Then multiple instances of App on top of each set of shared libs

Docker Interactions

  • Containers
    • Commit (to image)
    • Run (from image)
  • Images
    • All files, code + binaries
    • Pull images from the registry
    • Only pull what’s different
    • Push to the registry after building

Containers – Building

  • Build machine
    • Pulls from git
    • Build app.jar
    • Upload to Docker Hub
  • Docker Hub
    • Run on production machine–docker run <app image>

Containers – Hosting Prior to 8/2016

  • Locally, create VM for running containers in
  • docker-machine create --driver virtualbox thatconference

Containers – Hosting Now

  • Containers run in native hypervisor on OS X or Win 10
  • Can still use docker-machine and separate collections of images, but most won’t do this

Containers – Running

  • You run with an image
  • docker run -d yourregistryname/yourimage
  • Map ports to host ports
  • Use volumes to persist data between different versions of image
    • Docker containers are immutable
    • Can save data that your application is running by using volumes

Containers – Demo

  • docker run -d mongo:latest
  • docker ps
    • What’s running on this box
    • Assigns random name to container
    • More detail with container id
  • docker logs namehere
  • docker inspect namehere
    • .json document that talks about how it was set up
  • Container is just a definition of what should run
  • Stopping
    • docker stop namehere
    • Containers not yet dead
  • docker ps -a
    • Containers stick around
  • Remove permanently
    • docker rm namehere

Run on Windows

  • Docker runs on both Win 10 and OS X
  • bash shell stuff a bit problematic on Windows
  • docker run -it hello
    • .NET Core on Debian
  • File changes not quite working–compiled even after no changes

Link Node.js container to Mongodb from command line

  • Networking established by container, no need to map external ports
    • Injects name of other containers
  • Run Mongo, run node-starter
    • -e MONGODB=mongodb://mongo:27017
    • --link mongo
  • Link will link to the other container

Containers – Volumes

  • Data stays with the container as long as the container exists, whether running or stopped
    • So if you start/stop the same container, the data will be there
  • But container definition is immutable
    • So data will go away if you remove container and start again
  • Volumes tell docker to manage longer term persistence of data independent of the container
    • As long as one container is pointing to the volume, it will remain
  • Use -v on command line

Demo – Volumes

  • docker run -it -docker-data --volumes-from mongo

Containers – Volumes

  • For working locally on rapidly changing code, can map any folder in container to host volume
  • Use the -v flag to map a volume

Containers – Registry

  • Build image using docker build
    • docker build
  • After image is built, can be launched locally

Containers – Swarm

  • Make cluster out of docker machines and/or host machines
  • Docker swarm is part of base docker release
  • Some competitors, e.g. Amazon ECS
    • ECS “not that wonderful”

Containers – Troubleshooting

  • inspect
  • logs
  • docker ps
  • docker images
  • Log into running container
    • docker exec -it <name> /bin/bash

That Conference 2016 – Introduction to Angular 2.0

That Conference 2016, Kalahari Resort, Lake Delton, WI
Introduction to Angular 2.0 – Jeremy Foster, Developer Evangelist at Microsoft

Day 2, 9 Aug 2016

Executive Summary

  • Choosing Angular 2
  • A few core concepts of Angular
  • How to create a site and add a component
  • Streams

Contact Info

@codefoster
codefoster.com/thatdeck
github.com/codefoster/waterbug-ui

Choosing Angular 2

  • It really matters what you choose, what you depend on
  • One way of comparing frameworks is to look at trends, e.g. Google Trends, to see what people are searching on
    • Ember – peaked and then decline
    • Backbone – bit older, didn’t peak as high as Ember, also dropping off
    • React – Much higher than previous 2, heading up sharply
    • Angular – dwarfs even React (1 and 2)
      • No decline
      • In fact, there’s an uptick at end (recently)
  • Good to find a library that people are using

Virtual DOM

  • Angular does this
  • Doesn’t directly manipulate DOM
  • Creates virtual from real DOM
  • Manipulates virtual DOM
  • Diffs/updates the real one when it matters
  • Dramatic performance increase
  • Less time spent in CSS rendering

Component Model

  • (also done in Angular 2)
  • Evolution: Pages to MVC to Components
  • Easier to reason about app structure (node on tree)
  • Excellent encapsulation
  • Excellent reusability
  • Forward compatible with components standard

Web architecture paradigms

  • Pages, MVC, Components
  • Enter app
  • Pages: enter at a page
  • MVC: enter at a controller
  • Components
    • Tree structure

Favorite things about Angular.js

  • Virtual DOM, component architecture
  • Cohesive, relatively complete framework
  • TypeScript
    • No more friction to turn into JavaScript
    • Can easily do new ES6 things
  • Rx.js for streams
  • Great support and community

TypeScript

  • Beautiful
  • Valid JavaScript is automatically valid TypeScript
  • Can just rename .js to .ts
  • Can learn one concept at a time
  • Write TypeScript, transpile into JavaScript
  • Types
    • Compiler gives a bunch of help
    • Lots of pain goes away
  • Template strings
    • Easily embed variable values
  • typescriptlang.org/play
  • TypeScript gives us types, interfaces
    • Just create interface, duck-typing, don’t need class
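A minimal sketch of the interfaces/duck-typing point in plain TypeScript (the names are mine, not from the talk):

```typescript
// TypeScript structural typing: any object with the right shape satisfies
// the interface -- no class declaration or "implements" clause required.
interface Camper {
  name: string;
  talksAttended: number;
}

function summarize(c: Camper): string {
  return `${c.name} attended ${c.talksAttended} talks`;
}

// A plain object literal duck-types as a Camper.
const attendee = { name: "Pat", talksAttended: 3 };
const summary = summarize(attendee);
```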

Easiest Way to get Started in Angular

  • npm install -g angular-cli
  • ng new myapp
  • Makes folder
  • Takes a while, doing npm install for new app
  • Nearly ready for production at this point
  • But takes some reverse engineering to figure out how everything is wired up
  • Has tests
  • Using Broccoli for task running
  • src folder has your code, built into dist folder
  • ng serve
    • Fires up port, transpiles TypeScript
    • In watch mode, in case something changes
  • ng generate component hello-world
    • Create new component
  • ng generate service greeter

Resources

  • angular.io
    • Quickstart
    • Tutorial
    • Developer’s Guide
    • Cookbook
  • cli.angular.io
  • github.com/angular/material2 (in beta)
    • For skinning, use material design elements
    • Looks like Google’s material design spec
  • codefoster.com/github/waterbug

Components

  • Tree of components
  • These are components of UI
    • Although some may not show up visually (e.g. script)
  • E.g. App, NavBar, Header, Body, etc.
  • Circular references are ok
  • Components can be ref’d by more than one parent
  • Flip switch and you’re creating new web component

Demo: Adding a component

  • From command line
    • ng generate component hello-world
  • You’re sitting inside existing angular app
  • Gives you test for component
    • Tests travel with their components
  • .ts files (like .js)
  • Look at this in IDE
    • Use Visual Studio Code
    • WebStorm is alternative, but Code better for TypeScript
    • Even Google team is using Visual Studio Code
  • We see new hello-world folder
  • Look at .ts code
    • import – like require
    • Can bring in multiple pieces from single library, e.g. @angular/core
  • @Component is a decorator applied to a class
  • @Component decorator block
    • moduleId: allows using relative paths
    • Compilation error because we need to switch to common.js
    • Turns class into component based on decoration
    • selector – dictates tag, e.g. “app-hello-world” => <app-hello-world>
      • Set to hello-world
      • Can do conditional tags
    • templateUrl: hello-world.component.html
      • HTML that will contain this component
      • Could also inline the HTML, but generally worth having separate HTML
    • styleUrls: hello-world.component.css
      • Could have more than one
  • class
    • Constructor and onInit
  • Add hello world stuff
    • In .html

Run app

  • ng serve
  • not yet instantiating the component
  • Add component to app
    • app.ts
    • import { HelloWorldComponent } from './hello-world/hello-world.component';
    • On @Component, add a directives entry
      • directives: [ HelloWorldComponent ]

Services

  • Dashed circle in component tree
  • Just a class that you’ll use
  • Cascades
    • Everything below a service in the tree can use it
    • Service will be Singleton for everything under it
    • Could bring in at top, but not a great practice

Create service

  • ng generate service greeter
  • Create simple service that lets you ask for a message
  • Look at generated code
    • class GreeterService
    • Create new function
    • getMessage() { return "howdy"; }
  • Call the service
    • import { GreeterService } from "../greeter.service"
    • Add provider in @Component: providers: [GreeterService]
    • Dependency inject into your constructor
      • Private modifier on parameter makes it available throughout rest of class
    • this.greetsvc.getMessage();
  • Make data available to your view
    • Could create local property and set property to result of call
    • Anything in class can be used in view
  • Update view
    • hello there {{ message }}
    • (where message is member property)
    • Lots of other richness for inserting data in the view
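The "private modifier on parameter" point can be sketched in plain TypeScript, outside of Angular (GreeterService here is a stand-in for the generated service):

```typescript
// A plain stand-in for the generated Angular service.
class GreeterService {
  getMessage(): string {
    return "howdy";
  }
}

class HelloWorldComponent {
  message: string;

  // The private modifier on the constructor parameter both declares
  // this.greetsvc and assigns the injected argument to it in one step.
  constructor(private greetsvc: GreeterService) {
    this.message = this.greetsvc.getMessage();
  }
}

// Angular's injector would normally do this construction for us.
const component = new HelloWorldComponent(new GreeterService());
```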

Debugging

  • Can do a lot of debugging in the browser
  • Augury as Chrome extension
  • Can give you Angular info at runtime

Streams

  • Let’s visualize imperative programming
  • Left to right, do one thing after another
  • Function call, comes back with something
  • Grammar review
    • Declarative: I’d like it if..
    • Interrogative: would you please do this
    • Imperative: fetch me a beer
  • Problems with this model
  • Better to build rule than build task
  • Want to do async: do something while waiting for a function call to return

Imagine four quadrants

  • X Axis–Scalar vs. vector data
  • Y Axis–Sync vs. async
  • Sync
    • Vector: array
    • Scalar: T
  • Async
    • Scalar: promise
    • Vector: streams go here
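The quadrants can be sketched as TypeScript types (my own rendering of the diagram; the stream stand-in is simplified, where the talk would use an Rx.js Observable):

```typescript
// scalar vs. vector on one axis, sync vs. async on the other
const syncScalar: number = 42;                            // sync + scalar: T
const syncVector: number[] = [1, 2, 3];                   // sync + vector: T[]
const asyncScalar: Promise<number> = Promise.resolve(42); // async + scalar: Promise<T>

// async + vector is a stream: many values delivered to a subscriber over time.
// A real implementation would be an Rx.js Observable<T>.
function asyncVector(subscriber: (n: number) => void): void {
  [1, 2, 3].forEach(n => setTimeout(() => subscriber(n), 0));
}
```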

Dealing with collections of things

  • Return multiple things back from asynchronous function
  • Sometimes array is huge and we get out of memory issue if we return everything all at once
  • Could use Generator
    • Hard to understand
    • Concept is function that can return in middle, giving you a few things at a time
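A short TypeScript generator showing the "return in the middle" idea (my example, not the speaker's):

```typescript
// The body only runs as far as the next yield on each request, so a huge
// sequence never has to be materialized as one array in memory.
function* firstSquares(count: number) {
  for (let i = 1; i <= count; i++) {
    yield i * i; // "returns" here, then resumes on the next .next() call
  }
}

const gen = firstSquares(3);
const results = [gen.next().value, gen.next().value, gen.next().value];
```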

Back to asynchronous problem

  • Callbacks not that great
  • You have to specify things up front

Promises are nicer

  • Gives us thing back right away
  • Chain well and form an asynchronous pattern

But generators and promises are not quite enough

Promises are a pain

  • Only one return value
  • Can’t cancel them
    • E.g. open web call that you can’t cancel

Array of promises?

  • Takes a lot of babysitting
  • Hard to orchestrate
  • Difficult to handle timing

[Sean: Session ran long and I had to leave at this point]

That Conference 2016 – C#: You Don’t Know Jack

That Conference 2016, Kalahari Resort, Lake Delton, WI
C#: You Don’t Know Jack – George Heeres

Day 2, 9 Aug 2016

Executive Summary

  • This talk presented a series of lesser known but useful C# skills

Strings

  • ANSI C
    • Array of characters
    • Ptr to null-terminated array of chars
  • Getting length
    • More efficient to put length at start of string
  • Appending
    • Can’t just append in-place
    • Allocate new array, copy both pieces

C# Strings – What not to do

  • Long series of + operators to build up a string
  • Luckily, compiler makes this more efficient
  • A CIL diversion
    • Common Intermediate Language (aka MSIL)
    • Runs on CLI
  • 2nd example for concatenation
    • while loop that does + operator to append each piece
  • Better–use StringBuilder
    • Give it a size suggestion if you can make a good guess at capacity
    • Default capacity on StringBuilder is 16
    • Then doubles whenever it runs out of space

IFormatProvider

  • Format specifiers, built-in and custom
    • Putting object in format string, invokes ToString()
  • By default, ToString() returns type name
  • Can create custom IFormatProvider
    • E.g. if you want some custom format for telephone numbers
    • Bad–can’t globally register

String interpolation (C# 6)

  • Simplifies format string
  • Avoid problem with bad index
  • $"some text {varname} more string"
  • Compiler converts this to classic string format
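C#'s $-strings have a direct TypeScript/JavaScript analog in template literals (my comparison, not from the talk):

```typescript
// Backtick strings embed expressions with ${...}, like C#'s $"..." with {...}
const varname = "values";
const message = `some text ${varname} more string`;
```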

Debugging helper

  • Attribute – DebuggerDisplay, give format string to indicate how to format in debugger

Operators: Ternary

  • ? : syntax
  • condition ? valueIfTrue : valueIfFalse
    • Never nest ternaries
  • C# 6 null-conditional: dice?.Values
    • Returns null if the left-hand side evaluates to null

Null coalescing operator

  • left ?? right
  • Returns left operand if non-null, else returns right
  • Good for lazy loading properties
    • return _dice ?? (_dice = init())
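TypeScript (3.7+) shares the ?? operator, so the lazy-loading idiom from the notes can be sketched directly (the class and values are illustrative):

```typescript
class DiceHolder {
  private _dice: number[] | null = null;

  get dice(): number[] {
    // Initialize on first access, reuse the same instance thereafter.
    return this._dice ?? (this._dice = [1, 2, 3, 4, 5, 6]);
  }
}

const holder = new DiceHolder();
const first = holder.dice;   // triggers init()
const second = holder.dice;  // reuses the cached array
```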

Operators: Implicit/Explicit

  • Write implicit operator, e.g. convert object to int, perhaps person.id
  • Allows assignment without cast operator

Operator overloading

  • Standard arithmetic operator

Enumerable: yield return

  • In method for IEnumerable, only execute code when the enumeration is traversed
  • Technically, violates MVC pattern
  • Worry about whether connection is still open at a later time
  • Bypass this–calling ToArray, ToList forces fetching all objects

Bit math |, &

  • Use in enumeration, set flags as power of twos
  • Can then store multiple values in single enumeration variable
  • Need [Flags]
  • Can create helper values in enum, combination of other enumerated values
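The flags pattern sketched in TypeScript, which allows the same power-of-two trick as a C# [Flags] enum (enum values are illustrative):

```typescript
enum Permission {
  None = 0,
  Read = 1,
  Write = 2,
  Execute = 4,
  ReadWrite = Read | Write, // helper value combining other enumerated values
}

// Store multiple values in a single variable with bitwise OR...
const granted = Permission.Read | Permission.Execute;

// ...and test for individual flags with bitwise AND.
const canRead = (granted & Permission.Read) !== 0;
const canWrite = (granted & Permission.Write) !== 0;
```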

Constructors – DRY > “Daisy Chain”

  • Don’t do the same initialization across multiple constructors
  • DRY – Don’t Repeat Yourself
  • Invoke another constructor with this keyword
  • Can go up or down (fewer or more parameters)

Extension methods

  • Just syntactic sugar
  • But very useful

Hacking: Decompiling Code

  • JetBrains et al.
  • Decompile from IL, i.e. creates C# from IL

Hacking: Oh Snap

  • By default, anybody can decompile from your IL
  • Mitigate
    • Tools
    • Don’t store sensitive data
    • Move algorithms to server
    • .NET Obfuscator

Reflection

  • Review: can’t access private data
  • Can actually access using reflection
  • Example
    • GetType(), GetProperty(), GetValue()
    • BindingFlags Instance, NonPublic
    • Can also call method
    • Can also reflect on static data
      • Just need BindingFlags.Static
  • Generics
    • Get initial type, then construct generic type
    • GetConstructor or do MakeGenericType

Reflection: A Practical Example

  • Adding DisplayName attributes on enum values
  • This is a custom attribute
  • Then write extension method for the enum type, to get the display name
  • GetCustomAttributes on the type, find the attribute, cast

Craftsmanship: Refactoring

  • Best practices
  • Always leave code better than when you found it

“Magic” numbers

  • Move magic numbers to const int, define in just one place

Be expressive & declare intent

  • Move check into method, with name that tells you intent
  • E.g. IsValidTelephoneNumber()
  • Encapsulates ugly code, cleans up main code that you’re reading
  • Good code is self-documenting

Refactoring: String comparisons

  • Case insensitive comparison
  • ToLower or ToUpper–inefficient
  • Better
    • Use String.Equals, specify StringComparison.OrdinalIgnoreCase

Refactoring: Intellisense

  • To add intellisense for your code, use XMLDoc in front of everything
  • [Sean: Also shows up in object explorer]

List<T> Properties

  • Returning List<T> or IList from class
    • Implies functionality present in IList that you may not have implemented
    • Instead, return IEnumerable if you’re returning read-only list

Extensions: DTO

  • In n-tier model, use DTO (Data Transfer Object)
  • Example, same DTO going from database all the way out to HTML
  • Problems
    • Leaky abstraction
    • If you change data access layer, changes go through every layer
  • Better
    • New data object at each layer
    • Tools like structure map can help map from one object to another
    • Or just use extension methods to convert from A to B
    • Remember–layer should not know anything about layer above it
    • Extension methods help with this
    • Conversion is in the outer layer, not the inner

Extensions: More.. Support IEnumerable

  • Take IEnumerable a, return IEnumerable b
  • Use yield
  • Or write with LINQ

That Conference 2016 – What’s that Smell? It’s your Entity Framework!

That Conference 2016, Kalahari Resort, Lake Delton, WI
What’s that Smell? It’s your Entity Framework! – Russ Thomas

Day 2, 9 Aug 2016

Executive Summary

  • Entity Framework often gets a bad rap when it appears to generate very poor T-SQL
  • You can generate far better T-SQL by paying attention to how you construct queries in LINQ
  • This talk describes several “code smells” that lead to poorly performing T-SQL and that can easily be improved
  • [Sean: Most of this also applies to LINQ to SQL]

All communication with SQL Server is via T-SQL

  • T-SQL is the only way to talk to the server
  • From T-SQL to database storage
  • Goes through Optimization (indexes, etc.)
  • There’s no such thing as a technology that bypasses T-SQL
  • Entity Framework adds 4th piece
    • EF -> T-SQL -> Optimization -> Database Storage
  • EF makes things easier because it quickly scaffolds T-SQL
  • Goal today–work towards higher performance in EF

Code smell #1 – too many columns

  • EF can bring back too many columns from database
  • Demo
    • Retrieve data from People table, do something with LastName, Firstname
    • Using simple LINQ statement, just do select p (e.g. from person table)
    • Then foreach on results, pull off LastName, FirstName
  • Look at what EF sends
    • Start up Profiler (yes, should use Extended Events)
    • Reminder: never do the trace in production
    • If writing this query manually in T-SQL, we’d do simple query
    • Look at trace from EF version of this
    • Query is selecting large list of columns–far more than we’re using
    • Can also look at Reads metric in Profiler
  • Recommendation
    • Specify exactly the columns that you want
    • Use Projection
    • instead of: select p,
    • use: select new {p.LastName, p.FirstName}
    • Goes from 544 reads to 4 reads
    • Query now looks like what we’d write ourselves

Code smell #2 – Non Set Based Work

  • Demo–Retrieving e-mails
  • Write a couple of loops, loop through People p, then loop through p.EmailAddresses (implicit join)
  • LINQ query is still returning entire People table
  • And then code is iterating through everything to get e-mail addresses
  • This results in a new T-SQL statement for each pass through the loop
  • Horrible performance
  • Don’t retrieve data procedurally
  • Relational DB works better if you retrieve using set-based query
  • Solution
    • In T-SQL, you’d just do a join
    • In LINQ
      • from people in db.People.Include(“EMailAddresses”)
      • Tells EF that we’ll be coming back later for e-mail address
    • Even better way than .Include
      • Do join in LINQ
      • Then only a small iteration on final results

Code first discussion and demo

  • Most people prefer model-first, but code-first is slick
  • Can write code and then generate DB from code
  • Discussion
    • By default, it sets column type to nvarchar(max) for string
    • And decimal(18,2) for double (e.g. a salary field)
  • Conclusion
    • If you’re going to do code-first, go further than just column names and types
    • SQL Server does care about max string length, makes educated guesses on storage
  • Better way
    • Override OnModelCreating in DbContext, use fluent syntax on modelBuilder to set stuff up
    • Set various attributes for each column, e.g. maximum
    • Re-gen model, can see better attributes on columns
  • Thoughts
    • Code-first, often starts as simple hobby project
    • But you really do need to set proper attributes ahead of time

How to see what EF is doing under-the-covers

  • Profiler
    • Don’t run on production server
  • So how do we see what’s going on in EF without using profiler?
    • SqlLogger class, IDbCommandInterceptor
    • Requires EF 6
    • Can write out to text file, capturing TSQL statements
    • You implement different methods to capture different kinds of things
    • Can do xxxExecuting or xxxExecuted
    • Then: DbInterception.Add(new MyCommandInterceptor());
  • Stuff is intercepted client-side
    • So doesn’t impact performance on server
  • [Sean: Can also set context.Database.Log property]
  • Other stuff you can do with interceptors
    • Intercept and change statement, e.g. “soft deletes” where they don’t actually delete but just set IsDeleted column
    • No trigger !

Other comments

  • There are other optimizations you can do that have nothing to do with SQL Server

Wrapup

  • Will put zipped up demo on sqljudo.com

That Conference 2016 – Unit Testing with xUnit.net

That Conference 2016, Kalahari Resort, Lake Delton, WI
Unit Testing with xUnit.net – Chris Gardner

Day 1, 8 Aug 2016

Executive Summary

  • xUnit.net almost always a better choice than MsTest or nUnit

Why a new framework?

  • MSTest – gets the job done, but not always pretty
  • nUnit – came from jUnit
  • xUnit – two guys from nUnit started

Single instance per fact

  • MSTest runs each method in a class
  • xUnit.net creates individual container per fact (test)
  • Different instances mean that they are completely independent
  • Same instance, order of tests matter, because instance has state

Everything in one place

  • MSTest has 3 different places for initialization (class, assembly, etc)
  • xUnit does this in one place
  • AAA
    • Arrange – set up test (only what I need)
    • Act – single operation
    • Assert – make assertions (preferably only one assertion)
  • When assertion fails, it skips others
    • So better to have just one assertion

No ExpectedExceptionAttribute

  • Shouldn’t ever do this
  • Why bad
    • You’re saying entire method could throw exception
    • You didn’t isolate where the exception could come from
  • Instead
    • try/catch block, assert/fail right at that point

Reduce custom attributes

xUnit reduces the number of custom attributes that you need to use

Use language features

  • xUnit takes advantage of C# features (e.g. lambdas)

Extensibility

  • Completely open source
  • Can add in tweaks without rebuilding engine
    • e.g. custom validation

What MSTest did right

  • Faster Discovery
    • When you re-compile, discovery has to happen
  • Test impact analysis
    • When you make code change, only run tests that would be affected by recent code change

xunit.github.io

Performance

  • xUnit runs separate classes in parallel
    • And different methods in one class serially
  • Can be more performant than MsTest

Comparisons

Comparing xUnit to other frameworks

That Conference 2016 – Tell SQL Server Profiler to Take a Hike

That Conference 2016, Kalahari Resort, Lake Delton, WI
Tell SQL Server Profiler to Take a Hike – Jes Borland

Day 1, 8 Aug 2016

Executive Summary

  • SQL Server Profiler has been deprecated
  • We should start using Extended Events (XE)

1998

  • SQL Server 7.0, introduced trace
  • Profiler was GUI for trace

Profiler

  • 2012–deprecated
  • Replaced by Extended Events

Extended Events

  • Started out bad, but much more usable now

Why use

  • More thorough
  • Less resource intensive
    • Profiler hits system hard, can bring server to its knees
  • Covers new features
    • Can no longer use profiler for some features
    • Profiler has had the same set of events forever; XE keeps adding new ones
  • Azure
    • Can’t use Profiler, but can use XE

Where data is gathered

  • Post-processing vs. Pre-processing
  • Start session, Gather info, Output
  • Profiler
    • Filters after gathering information
  • XE
    • Stuff filtered before going into buffer

Common Profiler uses

  • Main uses
    • Capture queries executed, their performance
    • Capture deadlocks
    • Capture errors and warnings
  • Can use Extended Events for all the same things

Capturing all queries with Extended Events

  • Management | Extended Events
  • Sessions–already running sessions
  • system_health always running
  • Rt-click: New Session… (don’t use wizard)
  • Can start when server starts
  • Events
    • degree_of_parallelism – help capture every event on an object
    • sp_statement_completed, sql_statement_completed
  • Must click Configure to pick database
  • Events have fields (properties)
    • e.g. cpu_time
    • Clicking on event, you see properties
  • Global fields
    • Properties that all events have in common
    • Common: client_app_name, database_name, query_hash, sql_text
  • Filter / Predicate
    • Select all and then filter applies to everything
    • e.g. database_name
    • duration > 5 sec
  • Targets
    • ring_buffer
      • Uses memory
      • Set # items
    • event_file
      • Saves to directory on server
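Scripted out, a session built from the steps above looks roughly like this (a hedged sketch; the session name, database name, duration threshold, and file path are illustrative):

```sql
CREATE EVENT SESSION [CaptureQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.client_app_name, sqlserver.database_name, sqlserver.sql_text)
    WHERE ([sqlserver].[database_name] = N'MyDatabase'
       AND [duration] > 5000000)          -- duration is in microseconds (5 sec)
)
ADD TARGET package0.event_file (SET filename = N'C:\Temp\CaptureQueries.xel')
WITH (STARTUP_STATE = ON);                -- start when the server starts
```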

Viewing data

  • New entry under Sessions
  • Rt-click | Start
  • Expand, see targets
  • Rt-click | View Target Data
  • Choose Columns
    • Pick what you want to see
  • Sort
    • Don’t do on >1,000 rows
  • Can view data
    • Export to XEL (XML), CSV

Can script out the session

  • Readable text that looks similar to TSQL

XEL file

  • One copy per session, per run

sa permissions?

  • Don’t need
  • Profiler: Need ALTER TRACE
  • XE 2012+: ALTER ANY EVENT SESSION
  • XE is a bit more secure–can capture events but not necessarily read

Start/stop at particular time

  • Use SQL Server Agent

Existing traces

  • Can convert to XE sessions
  • Can find equivalents
  • Mapping

That Conference 2016 – Common TSQL Mistakes

That Conference 2016, Kalahari Resort, Lake Delton, WI
Common TSQL Mistakes – Kevin Boles

Day 1, 8 Aug 2016

Executive Summary

  • By fixing some of the most common mistakes made in SQL queries and stored procs, you can vastly improve the performance of your application
  • Kevin sees the same mistakes over and over again
  • In each case, Kevin presents the mistake and describes a proposed alternative

Miscellaneous Goodies

  • tsqlt.org  – Unit testing framework for SQL Server
  • Always specify NOT NULL for a column, unless you truly expect to handle NULL values
  • You want Index Seeks in query plans, not Index Scans. [SPS: Or, worse yet, Table Scans]
  • Don’t do formatting in SQL Server, but do in middle tier or client instead
  • Always SET NOCOUNT ON at the top of a stored procedure
  • Tune to the 2nd execution, not the first
  • Every trigger must be able to process n rows. The trigger is invoked only once when doing batch updates
  • Catch-all query leads to poor performance (e.g. WHERE @a=a or @a is null)
  • Don’t use table variables–no statistics on columns

NULLs and the NOT IN predicate

Problem arises when using NOT IN against list of values that contains NULL. In the example below, if there’s at least one Product with a NULL value for Color, the SELECT statement returns nothing.

SELECT C.color
FROM Colors AS C
WHERE C.color NOT IN (SELECT P.color
                      FROM Products AS P);

The fix is to use NOT EXISTS instead of NOT IN.

SELECT C.color
FROM Colors AS C
WHERE NOT EXISTS(SELECT *
                 FROM Products AS P
                 WHERE C.color = P.color);

Functions on indexed columns in predicates

When you use a function on an indexed column in a WHERE clause, the index is not used for a seek. This leads to an index scan, rather than a seek. In the example below, every row of the table is read.

SELECT customer_name
FROM Customers
WHERE LEFT(customer_name, 1) = 'L';

The fix is to not use functions on indexed columns in the predicate.

SELECT customer_name
FROM Customers
WHERE customer_name LIKE 'L%';

Incorrect subquery column

It’s easy to get mixed up when writing a subquery and use a column name from the main table, rather than from the table in the subquery. In the example below, the Sales table has a sale_date column and the Calendar table has a calendar_date column. The subquery uses the wrong column name.

SELECT sale_date, sale_amount
FROM Sales AS S
WHERE sale_date IN (SELECT sale_date 
                    FROM Calendar AS C
                    WHERE holiday_name IS NOT NULL);

The fix is to use the correct column name.

SELECT sale_date, sale_amount
FROM Sales AS S
WHERE sale_date IN (SELECT C.calendar_date 
                    FROM Calendar AS C
                    WHERE C.holiday_name IS NOT NULL);

Data type mismatch in predicates

When you have data types that don’t match between a parameter and a column used in a predicate, or between different columns when doing a join, SQL Server does an implicit conversion, which results in poor performance. Watch out for CONVERT_IMPLICIT in query plans.

For example, assume that last_name column in Customers table is VARCHAR(35), but you have a stored proc parameter of type NVARCHAR(35). In the example below, we get poor performance because SQL Server does an implicit conversion and the index is not used.

CREATE PROCEDURE GetCustomerByLastName
 @last_name NVARCHAR(35)
AS
 SELECT first_name, last_name
 FROM Customers
 WHERE last_name = @last_name;

The fix is to make sure that the data type of the parameter matches the column’s data type.

Predicate evaluation order

You can’t rely on evaluation order in predicates. That is, you can’t assume that sub-expressions in an AND clause are evaluated left to right and try to rely on short-circuiting the expression.

In the example below, assume that the account_reference column can be either numeric or character-based (bad design) and that checking account_type checks for this. The query fails because there’s no guarantee that the check against account_type happens before the CAST.

SELECT account_nbr, account_reference AS account_ref_nbr
FROM Accounts
WHERE account_type LIKE 'Business%'
  AND CAST(account_reference AS INT) > 20;

A workaround for this particular example would be to use a CASE statement.

SELECT account_nbr, account_reference AS account_ref_nbr
FROM Accounts
WHERE account_type LIKE 'Business%'
  AND CASE WHEN account_reference NOT LIKE '%[^0-9]%' 
           THEN CAST(account_reference AS INT)
      END > 20;

Outer joins and placement of predicates

When doing an outer join, the predicate (WHERE clause) is applied only after including the “outer” rows after applying the ON clause to do the join. This means that the WHERE clause may unexpectedly filter out rows that were intended to be included by virtue of doing the outer join.

In the example below, customers with no orders are unintentionally excluded from the final result. They get filtered out because they have an order_date of NULL after the outer rows are brought back into the query result.

SELECT C.customer_name, SUM(COALESCE(O.order_amt, 0)) AS total_2009
FROM Customers AS C
LEFT OUTER JOIN Orders AS O
  ON C.customer_nbr = O.customer_nbr
WHERE O.order_date >= '20090101'
GROUP BY C.customer_name;

The solution is to move the date check inside the ON clause.

SELECT C.customer_name, SUM(COALESCE(O.order_amt, 0)) AS total_2009
FROM Customers AS C
LEFT OUTER JOIN Orders AS O
  ON C.customer_nbr = O.customer_nbr
 AND O.order_date >= '20090101'
GROUP BY C.customer_name;

Subqueries that return more than one value

Using a subquery to retrieve a value from a second table can backfire when data changes and the subquery no longer returns a single value, as expected. The query below will work only if the sku exists only once in ProductPlants.

SELECT sku, product_description,
      (SELECT plant_nbr
       FROM ProductPlants AS B
       WHERE B.sku = A.sku) AS plant_nbr
FROM Products AS A;

The fix is to do this in a JOIN.

SELECT A.sku, A.product_description, B.plant_nbr
FROM Products AS A
JOIN ProductPlants AS B
  ON A.sku = B.sku;

Use of SELECT *

Do not use SELECT * in production code. Code that consumes the results may make assumptions about the schema, e.g. when inserting data into a second table. A query might then break if the schema changes.

You can also run into problems when using SELECT * in a view, when columns are later added or deleted. It’s a good idea to use SCHEMABINDING when creating views.

Scalar user-defined functions

Using a scalar user-defined function in a query can result in very poor performance, because the function can get applied for every single row of a target table.

As an example, assume you have the following function:


CREATE FUNCTION dbo.GetTotalSales(@sku INT)
RETURNS DECIMAL(15, 2)
AS
BEGIN
  RETURN(SELECT SUM(sale_amount)
         FROM Sales 
         WHERE sku = @sku);
END

and you use the function as follows:

SELECT sku, product_description, dbo.GetTotalSales(sku) AS total_sales
FROM Products;

This will result in very poor performance. The solution is to rewrite the function as a table-valued function or to rewrite the main query.

SELECT P.sku, P.product_description, SUM(S.sale_amount) As total_sales
FROM Products AS P
JOIN Sales AS S
  ON P.sku = S.sku
GROUP BY P.sku, P.product_description;

The table-valued function would look like:


CREATE FUNCTION dbo.GetTotalSales(@sku INT)
RETURNS TABLE
AS
RETURN(SELECT SUM(sale_amount) AS total_sales
       FROM Sales 
       WHERE sku = @sku);

Overuse of cursors

It’s generally a bad idea to use cursors. You can almost always do the same thing by writing a normal (i.e. set-based) query.