Powerdelivery 2.0.6 adds pretty 2012 summary page, delivery module API, and build asset cmdlets

I’ve just pushed out another update to powerdelivery. You can get it on Chocolatey (“cup powerdelivery” if it was already installed, or “cinst powerdelivery” if not).

Pretty Build Summary on TFS 2012

Behold the new summary page when you run your builds in TFS 2012:


The output you see above is included by default without any code needed on your part. You can, however, write your own messages into these sections using the new Write-BuildSummaryMessage cmdlet. This cmdlet only works on TFS 2012. When you specify the “name” parameter of the cmdlet, pass the name of a delivery pipeline code block (“Init”, “Commit”, or “Deploy”, for example) to have your message show up in that section.
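As a sketch of what this might look like (the -message parameter name is my assumption; only the “name” parameter is described above, so check the cmdlet’s help for the exact syntax):

```powershell
# Write a custom message into the "Deploy" section of the
# TFS 2012 build summary page. The -message parameter name
# here is hypothetical.
Write-BuildSummaryMessage -name "Deploy" `
    -message "Deployed MyApp database to $databaseServer"
```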

Delivery Module API

This is a feature I’ve been wanting to add for a while now. One of the nice things about powerdelivery is that instead of having to learn all the moving parts in Windows Workflow Foundation, you just have a PowerShell script and CSV files. However, there are some things you might do over and over again in your script that you’d rather drive with configuration, to avoid placing code in your script for them; deploying databases and websites are examples. There are already cmdlets in powerdelivery to do this, but I’ve introduced a new Delivery Module API that can be used to create even more powerful reusable modules for delivering different types of assets.

To create a delivery module, create a regular PowerShell module following any of the instructions you’ll find on the web and make sure it’s available to your script (putting the folder that contains it in your PSMODULEPATH system environment variable is easiest). Pick a name that’s unique, and from then on you may use that module in your scripts with the new Import-DeliveryModule cmdlet. If you call this cmdlet passing “MSBuild”, for example, powerdelivery will try to invoke functions that match this name following the convention below:

    Invoke-[ModuleName]DeliveryModule[Pre|Post][Stage]

Where “ModuleName” might be “MSBuild” and “Stage” might be “Compile” (for example, Invoke-MSBuildDeliveryModulePreCompile). Name the functions in your module starting with “Pre” or “Post” to have them run before or after the actual “Compile” function in the build script that imports the module.

If a matching function is found, it is invoked before or after the pipeline stage specified, and you can do anything you want in it. A common use would be to look for CSV files with values you can pass to existing functions. I’ve included an initial MSBuild delivery module as an example of how to do this: it looks for a file named “MSBuild.csv” and, if it finds it, builds any projects using the settings in that file.
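A minimal delivery module following this convention might look like the sketch below. The module name and CSV columns are hypothetical, and the real API may pass parameters to the stage functions:

```powershell
# MyDatabase.psm1 -- a hypothetical delivery module.
# Runs before the Deploy stage of any build script that calls
# Import-DeliveryModule MyDatabase.
function Invoke-MyDatabaseDeliveryModulePreDeploy {
    # Look for an optional CSV file with deployment settings
    if (Test-Path "MyDatabase.csv") {
        $rows = Import-Csv "MyDatabase.csv"
        foreach ($row in $rows) {
            Write-Host "Deploying database $($row.Name) to $($row.Server)"
        }
    }
}

Export-ModuleMember -Function Invoke-MyDatabaseDeliveryModulePreDeploy
```

A build script would then opt in with Import-DeliveryModule MyDatabase and drop a MyDatabase.csv next to its configuration files.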

I’m still working on this API so any feedback is appreciated.

Build Asset Cmdlets

A common operation in your build script is copying files from the current “working” directory of your build on the TFS build agent server out to the “drop location”, which is typically a UNC path. To make this easier I’ve added the Publish-Assets and Get-Assets cmdlets, which are quite simple and take a source and destination path. These cmdlets push and pull files between your local working directory and the drop location without you having to specify a full path, and should simplify your script. The build summary page on TFS will display any assets you publish.
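Assuming the source and destination are relative paths (the exact parameter shape may differ; see the cmdlet help), usage looks something like this:

```powershell
# In the Compile function: push build output to the drop location.
Publish-Assets "MyApp\bin\Release" "MyApp"

# In the Deploy function: pull those same files back down
# onto the build agent for deployment.
Get-Assets "MyApp" "LocalDeploy\MyApp"
```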

Coming Soon

The next two things I will be completing are the integrated help for all included cmdlets (so you can get syntax help in PowerShell instead of on the wiki), and updating the existing templates other than the Blank one to match the new syntax of the 2.0 release. I’ll then work on getting more templates and delivery modules put together to help you hit the ground running with your delivery automation.

powerdelivery now supports Microsoft Team Foundation Server 2012

The open source powerdelivery project I’ve been blogging about using to enable continuous delivery automation for your software projects has now been updated to support TFS 2012.

Creating new projects using TFS 2012 powerdelivery

To add powerdelivery to a TFS 2012 project, just pass the -tfsVersion parameter to the AddPipeline utility, specifying 2012. For instance:

    .\AddPipeline.ps1 (other parameters) -tfsVersion 2012

If your developers are also using Visual Studio 2012, you will probably want to pass the -vsVersion parameter as well and specify 11.0 (10.0, which is Visual Studio 2010, is the default).
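For a team on both TFS 2012 and Visual Studio 2012, that means something like the following (other parameters elided as in the example above):

```powershell
.\AddPipeline.ps1 (other parameters) -tfsVersion 2012 -vsVersion 11.0
```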

Upgrading existing projects to TFS 2012 powerdelivery

If you were using TFS 2010 and have upgraded your server to 2012, you can upgrade your powerdelivery-enabled projects to the 2012 configuration: grab the latest zip from master and add the following files to source control from the source:


Lastly, you will need to edit the Build Process Template for your “Commit” build to use the PowerDeliveryTemplate11.xaml file (instead of PowerDeliveryTemplate.xaml), and edit the rest of your builds (Test, CapacityTest, Production) to use PowerDeliveryChangeSetTemplate11.xaml.

Happy continuously delivering!

Deploying Microsoft Business Intelligence solutions with powerdelivery

I’ll discuss continuously delivering web and mobile applications in a later post, but since I just recently worked on one I’d like to start with guidance for using powerdelivery to deliver a Business Intelligence (BI) solution. This solution might deliver databases, cubes, packages, reports, or SharePoint dashboards like PerformancePoint.

I’ll assume at this point you have a backlog of user stories that describe desired business capabilities. You’ve also had your sprint planning meeting and come up with acceptance criteria for what you’re going to build in your first sprint. You want to iteratively release pieces of acceptance-tested BI functionality, and to reduce your cycle time: the time it takes to go from an idea for BI functionality until it is delivered to users.

In a nutshell you want to release to users as quickly as possible after an idea is conceived!

Perform the following steps initially by deploying only to your “Commit” (development) environment. Once you have it working there, immediately promote it to test, and then to production, to get your initial release in place even if it has no functionality yet!

Step 1: Create build account

Have your IT administrator create a user account for builds to run under, and configure Team Foundation Server to build using this account (see the TFS agent service on your TFS build agent computers).

Step 2: Procure and secure environment

Identify the environments that will be used to deliver your solution. Determine the desired production environment’s physical and virtual nodes (servers or VMs), and procure a development and test environment with at least one node for each scaled point of the production environment. For example, if you have a load balancer with 5 nodes in production, have a load balancer with just one node in the development and test environments. Ideally, set these computers up from scratch with identical OS and middleware (SQL Server, for example) and lock them down so they can only be accessed by the Active Directory build account. All future configuration changes will be made via your build.

Step 3: Open ports

Have your IT administrator open the ports the TFS build agent needs to perform automation activities. Ports 5985 and 5986 (WinRM) must be open on each computer to be automated (development, test, and production database and SharePoint server computers or VMs, for example). You also need to open any ports needed to perform other automation activities and send data between computers; examples are port 1433 for SQL Server, or ports 2383 and 2384 for SSAS (the multidimensional and tabular defaults, respectively). You may also need to open port 137 to allow UNC paths to be accessed.
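On each target computer, the remoting ports can be opened from an elevated prompt with something like the following (rule names are my own; scope the rules to your build agent’s subnet per your network policy):

```powershell
# Allow PowerShell remoting (WinRM) from the build agent.
netsh advfirewall firewall add rule name="WinRM HTTP" dir=in action=allow protocol=TCP localport=5985
netsh advfirewall firewall add rule name="WinRM HTTPS" dir=in action=allow protocol=TCP localport=5986

# Plus any application ports, e.g. SQL Server:
netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433
```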

Step 4: Create delivery pipeline

Next, use the AddPipeline utility to create builds for your deliverable. You want to create a build for each capability you need to independently deploy. If you have databases, SSIS packages, and cubes that all work together to provide one data mart, web site, or BI capability, you may want to keep these in a single build for simplicity of maintenance.

Step 5: Create UNC shares on environment nodes

Create a UNC share on each of the computers that will have SSIS packages, SSAS models, databases, or other assets deployed to them as files. Give the Active Directory account your build runs under write permission on these shares.
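One way to do this from an elevated prompt on each node (the share path, share name, and account are hypothetical):

```powershell
# Create the folder and share it, granting the build account
# CHANGE (read/write) permission on the share.
mkdir C:\Deploy
net share Deploy=C:\Deploy "/GRANT:MYDOMAIN\tfsbuild,CHANGE"
```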

Step 6: Record your environment configuration

Edit the .csv files for your powerdelivery build’s environment configuration and add name/value pairs for each setting that is environment specific. Include settings for things such as:

  • Database servers
  • Database names
  • Computer names
  • UNC paths
  • Tabular server instances
  • Connection strings

Tip: If you have multiple nodes in production but only one in development and test and need to automate configuration, use a PowerShell array as the value in the .csv file and loop over it in your script for each node. In production the setting’s value is an array listing every node, while in development and test it is an array with a single node.
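As a sketch of that tip (the setting name, node names, and file names are hypothetical), the production .csv might carry a five-element array where the development .csv carries one, and the script loops over whichever array it gets:

```powershell
# In the production environment .csv the value might be:
#   WebNodes,"@('web1','web2','web3','web4','web5')"
# and in the development environment .csv:
#   WebNodes,"@('devweb1')"

# In the build script, evaluate the setting into a real array
# and deploy to each node it contains.
$webNodes = Invoke-Expression $webNodesSetting
foreach ($node in $webNodes) {
    Write-Host "Deploying site to \\$node\Deploy"
}
```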
Step 7: Identify subsets of data for automated testing

As you identify source data over sprints, create views (if they don’t already exist) to abstract the data and use a database migration technology such as RoundhousE or Visual Studio Database Projects to deploy them with your build. If an existing system that one of your components is sourcing data from has millions of rows, you want to have development versions of these views that only pull a subset of your data suitable for testing. The views in your UAT and production environment will pull the entire data set.

The reason for this is to speed up builds and processing in your commit (development) environment, so that automated tests can run and provide feedback faster than they could if they processed the entire set of data. You will need to lean heavily on experts in the data, and on business analysts, to either identify a suitable existing subset of data, or to script the creation of new test data (perhaps in an empty database) suitable for running your tests.

Step 8: Load your environment configuration

Modify your build script to create global variables, in the Init function of the build, for all the environment configuration settings you filled out in step 6.
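Assuming a helper that reads the current environment’s .csv settings (Get-BuildSetting here is hypothetical; use whatever mechanism your template provides), Init might look like:

```powershell
function Init {
    # Promote environment-specific settings to globals so every
    # later stage (Compile, Deploy, tests) can use them.
    $global:databaseServer = Get-BuildSetting "DatabaseServer"
    $global:databaseName   = Get-BuildSetting "DatabaseName"
    $global:deployShare    = Get-BuildSetting "DeployUncPath"
}
```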

Step 9: Copy assets to the drop location

Modify your build script further by editing the Compile function. Use this function to copy any files you will need during deployment from your current directory (where TFS grabbed a copy of the source code) to the drop location. These files include .dlls, SSIS packages, .asdatabase files, .sql scripts, PowerShell scripts, or any other files that will be deployed to the environment computers.
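Using the Publish-Assets cmdlet described earlier (the project paths here are hypothetical), the Compile function can be as simple as:

```powershell
function Compile {
    # Copy deployment inputs from the working directory
    # out to the build's drop location.
    Publish-Assets "Databases\Scripts" "Scripts"
    Publish-Assets "ETL\bin\Packages"  "Packages"
    Publish-Assets "Cubes\bin\Release" "Cubes"
}
```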

Step 10: Script environment modifications

If the latest version of your code requires a new dependency or a configuration change in the environment, script it here.

Examples of things to do here are:

  • Set environment variables. Your SSIS packages or other deliverables will pick up the correct settings for each environment.
  • Use the Azure PowerShell cmdlets to procure an Azure node or database
  • Give security permission to users on SQL databases
  • Run NuGet to install .NET dependencies (log4net, Entity Framework etc.)
  • Apply OS or middleware patches
  • Change security settings of services or file system

Step 11: Script deployment activities

In the Deploy function of your script, you perform the steps that actually deploy the solution’s assets. You can only use files in the build’s drop location, so a common mistake here is trying to run something that’s in source control but that you didn’t copy in the Compile function. If this happens, go back to step 9.

Examples of things to do here are:

  • Run SQL scripts
  • Deploy and/or run SSIS packages
  • Create and/or run SQL jobs
  • Deploy and/or process SSAS cubes (BISM Multidimensional or Tabular)
  • Start/stop SQL jobs

Tip: Your script runs on your TFS build agent computer, but many of the deployment utilities used for automation, such as dtexec.exe (to run SSIS packages) or Microsoft.AnalysisServices.Deployment.exe (to deploy cubes), run on the database or other servers. Because of this, you will often need to copy the files that are input to these commands to one of the UNC shares you set up in step 5.
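Putting those pieces together, a Deploy function might copy a package to the step 5 share and run it on the server itself (server, share, and package names are hypothetical):

```powershell
function Deploy {
    # Pull the packages from the drop location, then copy them
    # to the UNC share on the database server.
    Get-Assets "Packages" "LocalDeploy\Packages"
    Copy-Item "LocalDeploy\Packages\LoadDataMart.dtsx" "\\dbserver\Deploy"

    # Run the SSIS package on the database server, since dtexec
    # executes where it is invoked.
    Invoke-Command -ComputerName "dbserver" -ScriptBlock {
        & dtexec /File "C:\Deploy\LoadDataMart.dtsx"
    }
}
```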

Step 12: Script unit tests

In the TestUnits function of your script, you can run tests that make sure that individual units of your solution work independently. You might use MSTest.exe to run Visual Studio database unit tests written in T-SQL.
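For example (the assembly path is hypothetical; VS110COMNTOOLS assumes Visual Studio 2012), a compiled database unit test assembly can be run with MSTest.exe:

```powershell
function TestUnits {
    # Run Visual Studio database unit tests from the compiled
    # test assembly copied to the drop location in Compile.
    & "$env:VS110COMNTOOLS..\IDE\MSTest.exe" `
        /testcontainer:"LocalDeploy\Tests\DatabaseTests.dll" `
        /resultsfile:"TestResults\units.trx"
}
```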

Step 13: Script acceptance tests

If you automate only one kind of test, at least automate the acceptance ones. These are what you told the business during the sprint planning meeting that you’d demonstrate. Read my prior post on defining acceptance if you are new to it.

Because most BI solutions use scheduled batch jobs to do processing, you will need to cause jobs or any other sort of processing to run in the Deploy function in step 11 if you want them available to be tested here.

Since powerdelivery is a synchronous script, you have two options for doing acceptance testing:

Inline publishing of tests

With this approach, your script runs the SSIS packages or SQL scripts that a job normally invokes on a schedule, and waits for them to finish. You can then run your tests immediately after, and automate publishing their results if you want business users who access the TFS portal to see the results of your testing.

Late publishing of tests

With this approach, you let the build finish, then wait for the scheduled SQL jobs or processing to complete by monitoring the process. When it completes, run your tests manually using Visual Studio and publish them to the build they were run against.

Step 14: Automate restoring test databases from production

To make sure that any database alterations made in production are tested exactly as they will work in production, add a step to the Deploy function that takes a production backup and restores it over your test databases before running those alterations. Only run this step during a test environment build; you can check for this using powerdelivery’s Get-BuildEnvironment helper cmdlet.
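Sketched with the Get-BuildEnvironment cmdlet (the environment name string, server names, and backup path are assumptions; Invoke-Sqlcmd comes from the SQL Server PowerShell module):

```powershell
# Only refresh test data from production during a Test build.
if ((Get-BuildEnvironment) -eq "Test") {
    Invoke-Sqlcmd -ServerInstance "prodsql" -Query `
        "BACKUP DATABASE DataMart TO DISK = '\\nas\Backups\DataMart.bak'"
    Invoke-Sqlcmd -ServerInstance "testsql" -Query `
        "RESTORE DATABASE DataMart FROM DISK = '\\nas\Backups\DataMart.bak' WITH REPLACE"
}
```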

Promoting your releases

At this point, you should get your build to succeed in the Commit (development) environment and then immediately promote it to Test, and then Production. As you plan sprints, use the knowledge you gained setting up the script to help team members estimate the work it will take to modify the build to deploy their features and make any environment changes they need. Include this in your estimates for user stories you deliver in the future.

When one or more team members have their features passing acceptance tests in the Commit environment, promote that build to your Test environment and have it inspected by QA. If it is of sufficient quality, hold the sprint review meeting. If the business decides to release to production, queue a Production build and release it to your users.

5 advantages of powerdelivery over vanilla Team Foundation Server

The open source powerdelivery project for doing continuous delivery that I’ve blogged about does a lot of things in a small package. It’s natural to wonder why you would bother to use it if you like the concepts of continuous delivery but have been using off-the-shelf Microsoft Team Foundation Server (TFS) builds and figure they are good enough. Here are just a few reasons to consider it for streamlining your releases.

You can build locally

TFS lets you create custom builds, but you can’t run them locally. Since powerdelivery is a PowerShell script, as long as you fill out values for your “Local” environment configuration file you can build on your own computer. This is great for single-computer deployments, demoing, or working offline.

You can prevent untested changes from being deployed

You have to use the build number of a successful commit build as input to a test build, and the same goes for moving builds from test to production. This prevents the latest changes from being deployed to test or production with unintended modifications made since the build you actually tested.

You can add custom build tasks without compiling

To customize vanilla TFS builds, you need to use Windows Workflow Foundation (WWF *snicker*) and MSBuild. If you want to do anything in these builds that isn’t part of one of the built-in WWF activities or MSBuild tasks, you’re stuck writing custom .dlls that must be compiled, checked back into TFS, and referenced by your build each time they change. Since powerdelivery just uses a script, you simply edit the script and check it in, and your Commit build starts automatically.

You only need one build script for all your environments

You can use the vanilla technologies mentioned above to create builds targeting multiple environments, but you will have to create a custom solution. Powerdelivery does this for you out of the box.

You can reduce your deployment time by compiling only in commit (development)

The Commit function is only called in Local and Commit builds, so you can perform long-running compilation activities only once and benefit from speedy deployments to your test and production environments.

Introducing powerdelivery

Today I’m beginning a series of posts on Continuous Delivery with Team Foundation Server and Windows PowerShell based on my open source project powerdelivery. The project is already in use on a business intelligence solution for a large healthcare provider.

In 2011 I was working at a large client who had a hard time releasing software. People would forget to apply environment changes (like server patches, environment variables, and settings) that had been made in development when doing a production deployment, and vice versa. The production system would go down, everyone would stress out, and releases happened very infrequently because things were uncoordinated and fragile.

Around that time, I read the book Continuous Delivery by Jez Humble and found within it a solution. It talks about using a delivery pipeline to promote builds through the environments your code runs in so that changes don’t get out of sync with each other. There are many other benefits and techniques to Continuous Delivery, and many of them rely on having a team that follows SCRUM to release in sprints, defines its work with easily demonstrated acceptance criteria, and uses automated tests that verify that criteria repeatably.

I loved the concepts in the book and wanted to start using them on my own projects. I work as a consultant for Catapult Systems, a Microsoft Gold Partner, and our primary platform for builds is Team Foundation Server. Unfortunately, while TFS gives you an easy way to compile a project with test gates for one environment, it doesn’t give you much for coordinating the promotion of builds from one environment to the next like you’d need for a delivery pipeline. So I built one on top of it.

What’s Included

  • A framework for writing a single PowerShell script that deploys your software into multiple environments.
  • Templates with included build scripts that you can use as a starting point for your builds.
  • A mechanism for storing environment configuration settings in .csv files. For instance, your local computer, development, test, and production environment probably use a different database.
  • PowerShell cmdlets that help with writing builds.
  • Build process templates for Microsoft Team Foundation Server that trigger your script.
  • An Add Pipeline utility that will add powerdelivery to your Microsoft Team Foundation Server project.

Where do I get it?

Read the wiki and grab a copy (from source) off of GitHub. You don’t have to compile anything to use it, but you’ll want to read the requirements in the documentation for getting TFS and PowerShell set up.

What’s the delivery workflow?

Overcoming your quality debt to enable continuous delivery

If you want to release software faster, it’s a given that you need automated deployment capabilities, and an incremental product change process like SCRUM – but you also need confidence in the code base. This confidence comes from having agreement on what the application currently does, and being able to verify quickly at any time that it still works as users expect.

Acceptance defines how you demonstrate it is working

Even in an agile development process such as SCRUM it is of the utmost importance that you define acceptance criteria that can be quickly verified. When the business and the team agree to implement something in a sprint, coding shouldn’t be started until a description exists in English of how to demonstrate that it works properly. This description provides both a script for the developer to demo the functionality to the business, and a set of steps to guide the developer or a QA resource to implement an automated test that verifies it works correctly without requiring manual testing.

It is important that the acceptance criteria leave as little room as possible for interpretation, or for needing to get more information from their author.

Bad acceptance criteria

Go to the products page. Click on a product and enter in purchase details on the next page. Submit the order and make sure there aren’t any errors.

Good acceptance criteria
  1. Go to http://mydevserver/somepage
  2. Click the product “My Test Product”
  3. In the page that appears, enter the following values:
    1. Name: Test
    2. Quantity: 6
  4. Click the “Submit” button
    1. On the page that appears, the message “Product ‘Test’ was added to your cart” must appear on the page.
    2. An email must have been sent that includes the subject ‘About your order from XYZ corporation’. The body should contain the message in the attachment to this user story where ‘Product name’ in the attachment is replaced with ‘Test product’ and ‘User’ is replaced with the account you were signed in as when you made the purchase.

Quality products are not an accident

When you haven’t defined acceptance criteria, either written up or embodied in an existing self-documenting automated test, there is functionality in your application that is subject to change at any time. If refactoring is done, the existing functionality can break without a manual test catching it. Even worse, there is nowhere the team can go to understand its original intent and the reason for its existence in the first place. This leads to incorrect assumptions when enhancements are made, disagreement between team members when communicating its operation and intent, and a deceptive view of its completeness.

In a green field project, it’s easier to enforce automated acceptance testing, as you can hold back kicking off the next sprint until all acceptance tests pass and the features are demoed and accepted by the business. The business and the team get used to the velocity needed to do so, and to the rate at which new features become available.

In a brown field (existing) project, chances are there will be significant functionality that already exists for which no automated tests exist, so verifying it must be done through manual testing. In a worse scenario, it may not be tested regularly at all, and the business has lost clarity on its behavior and operation. Automating verification that it works properly will cause the team’s velocity to slow down.

When this happens, a team gets into what I like to call “quality debt”. The more features that exist in the application that are unverified, or the longer it takes to verify those features, the more investment it takes to ensure the quality of the application, and in turn the slower you can release. This situation is in direct opposition to continuous delivery, so to get your product ready for quicker cycle times a strategy for paying down this debt is crucial. The further your features are along the scale described below, the lower your debt.

Evaluating acceptance readiness for continuous delivery in existing functionality

If you are on a brown field project and want to set a goal to reduce your release cycle time, the team needs to evaluate each function of the software to determine how far along this progression of acceptance readiness it is.

  1. Implemented – the software does something in the code, and it was demonstrated at some point. No tests or documents exist that describe it however.
  2. Partially verifiable – a document or test exists somewhere that describes how the function should work, but it has grown out of sync with changes over time of the feature, or is incomplete and does not cover all cases.
  3. Acceptance testable – a document exists somewhere that describes how the function should work, but it takes manual effort to verify it each time a release grows near.
  4. Automated acceptance – Acceptance tests exist that can be run automatically on demand and report their success or failure. They cover all implemented aspects of the feature as of the current sprint.

Pay off existing debt

If your project’s features are not at phase 4 (automated acceptance), you will need to schedule time in coming sprints to get them there. I would caution against jumping from implemented to automated acceptance without establishing the acceptance criteria again. If there is no document or test that says what a feature should do, automating it without making sure it meets the needs of the business (and that what it does is understood) is dangerous, because you are investing in automating something that is more likely to change. Revisit each existing feature with the business and come up with acceptance criteria first; then automate.

Don’t add new debt

In addition to scheduling a portion of the sprint’s workload to pay down the existing debt, avoid adding to it by defining better acceptance criteria in user stories scheduled for completion that relate to new work. Also require that automated acceptance tests proving the proper operation of those stories be built before the sprint concludes. You will find that you can get less done in the sprint, but at each one’s conclusion you have features that can be released to users with confidence, as they can be verified automatically at any time.

The pain only lasts as long as you want it to

The harsh truth is that the more your existing application does that is not automated for acceptance, the more painful it is to get out of debt. An organization can continue to go about business as usual without addressing it, but doing so leaves them at a competitive disadvantage as they take longer to get to market with new functionality. Organizations with better discipline around managing their workload and establishing automatable acceptance processes will respond more quickly to changes in the market, deliver features that meet the needs of their customers faster, and deliver them with more consistent quality.

Keep your agile promises by validating acceptance and reducing your sprint workload

Most project teams have tried some permutation of an agile or SCRUM process by now, and a consistent theme amongst those I see on consulting engagements is a failure to deliver the work done in a sprint to users before starting the next one. Continuous integration, standup meetings, and backlogs are usually present, and some will even try test-driven development. But at the end of the sprint, the work is still not ready to deliver to users.

At the end of a sprint, there is a meeting that takes place where stakeholders and the development team get together to review the work that was done. More often than not, the stakeholders like what they see to some extent, but find discrepancies between what they thought they were getting and what was actually implemented. In every case the reason this occurs is a failure to establish acceptance criteria prior to doing the work.

An agile or SCRUM process is usually sold to the business as a way to get more communication going with the development team, and as an ability to shift priorities, since the backlog can be reprioritized and the work assigned to development in the next sprint allows for more flexibility than a waterfall approach with its typical several-month-to-year release cycles. Additionally, higher quality is usually promised to the business.

Inexperienced agile teams may read about extreme programming and story-driven approaches and like the fact that requirements are sold as needing to be only short statements, not the detailed use cases of a waterfall project. They often take this to the extreme: a simple description of the work is established in the backlog, and this is the only agreement that can be pointed to with certainty at any point in the process.

A user story in an agile or SCRUM development team’s backlog should be a promise of a future conversation. When a story is scheduled into the next sprint and that sprint starts, the first activity that should take place is a conversation between the business stakeholder most intimate with the story and the developer who is going to do the work, with a QA person responsible for validating acceptance of the work also present.

During this conversation, the goal is to establish acceptance criteria. This focus on criteria provides several benefits. First, it allows the business more time to provide details about the story that they may not have originally communicated when it was placed on the backlog. Second, it allows the developer a chance to communicate technical challenges with the business’ vision, and gives the two parties a chance to come to a compromise in design that will sufficiently meet the needs of both. Thirdly, it enables QA to think about possible ways in which it may be tested, which often leads the business and development representatives to further clarity. The last, which is the subject of this post, is to establish what constitutes successful completion of the work.

To do so does not require a use case, or a large document. Rather, the group should be able to walk away with a description in English of ways in which the user or system will interact with the software that, if successful, validate that the work was done correctly. The level of detail you go into with your description is up to your team, but the more detailed, the more sure you can be that what will be completed at the end of the sprint will be ready for delivery to users. If you are building a calculator for example, you may establish several mathematic calculations that have to succeed to consider it acceptable. Every possible calculation does not need to be present here; but rather enough that it would be difficult to meet the acceptance criteria and still deliver a low quality feature.

Once these acceptance criteria are established, it is of great importance that the person responsible for user acceptance, typically a QA representative, works with the developer to write automated acceptance tests (highly preferred) or come up with a manual testing process that can be used to verify them. This work can be done before the code is written (test-first acceptance) or during it (parallel acceptance), but do not leave it until the end. It is highly important that developers are able to execute the acceptance tests, whether automated or manual, several times during the sprint to gauge their progress toward completion.

Because the acceptance criteria are established up front, they help developers focus on delivering precisely that functionality, and they also reduce the chatter that often happens in their absence as a developer attempts to get clarification from the business about details that were not there at the beginning. That said, if the team is inexperienced with defining acceptance, the first few sprints may produce two undesirable side effects.

The first of these is that since developers are not used to having to deliver acceptance tests along with the work itself, there is a good chance that too much work will be scheduled in the sprint, and some of it may be late. It is of great importance that the entire team – the business, QA, and development – accepts this possibility and uses it as a learning experience to discover what a reasonable amount of work looks like when it has to be accepted and deliverable before the sprint is over. The next sprint will likely deliver less functionality in the same amount of time, but it will be done by the end of the sprint. This is what separates a working process from what I’ve heard others call “agilefall” or “waterscrum”: cherry-picking practices from an agile/Scrum process while failing to deliver on its promises.

The second side effect you may feel when first implementing this change is that there will still be gaps between what was delivered and what the business expects. Let me be clear here – it is perfectly normal, and actually a great benefit of using an agile process, that the business can see something every 2 weeks (or however long your sprint is) and, upon doing so, provide additional detail and changes that can be scheduled for the next sprint. However, the entire delivery team needs to get better at articulating what they do plan to deliver in a way that can be acceptance tested and is clear to the developer, so that what is agreed upon is not open to interpretation.

This subtle difference is important – it is unrealistic and illogical for the business to attempt to hold developers accountable for not delivering functionality at the end of a sprint for which acceptance criteria could not be defined. If the business wants developers to do a better job at delivering what they want, they must improve their ability to articulate it, or simply embrace the great flexibility that comes with an agile process to allow them to figure out more about what exactly they want every two weeks.

One more change needs to occur to your process to allow the work done in the sprint, now backed by acceptance criteria, to be delivered to users at the end. The developer should allow time to meet with operations personnel, or whoever maintains the various environments (development, acceptance, and production, for example), to ensure the work can actually be delivered to users at the conclusion of the end-of-sprint meeting. The business may still decide that it is not ready for users from a functional standpoint, but the goal should be for the functionality delivered in each sprint to be of high enough quality to release to users immediately after the sprint, should they desire. Refer to my post about the dangers of making production an island and not optimizing your build process for quick escalation from a user acceptance environment into production for more information on how your operations team can deliver deployment scripts along with the development team.

Think for a moment about the net result of all of this. At the end of a sprint, QA can demonstrate that the delivered functionality meets acceptance criteria that not just they or a developer came up with, but the business as well. We’ve all seen the project where QA said a feature was tested, yet the business is upset with both them and the developers over what was delivered. Developers also do not have to feel any anxiety that what they deliver will not be acceptable. They should, however, be comfortable with the fact that upon seeing the work, the business may want things changed or additional functionality put in. This is exactly why businesses usually agree to trying out an agile/Scrum process in the first place.

Even though less functionality is delivered at the end of a sprint than your team may be used to – because you must now include time for defining acceptance criteria, building automated or manual acceptance processes, and deploying that functionality into a user acceptance environment – the net result is that your business truly will be able to deliver new functionality to users every two weeks. That outcome – continuously delivering real value to your users – is more important than any individual agile or Scrum practice.

Put all your environment-specific configuration in one place for a pleasant troubleshooting experience

Most software applications leverage a variety of third-party libraries, middleware products, and frameworks. Each of these tools typically comes with its own method of configuration. How you manage this configuration has a direct impact on how much time you waste tracking down problems caused by differences in the environment in which your application runs.

Configuration as it relates to delivering your software comes in two forms. The first is environment-neutral configuration: settings necessary for the tool or software to work that don’t change between your development, testing, production, or any other environment. The second is environment-specific configuration, which is exactly the opposite – it varies from one environment to the next.

When your application is delivered into an environment, whether on a server somewhere or your users’ devices, troubleshooting configuration problems is much easier if all environment-specific configuration is in one place. The best way to do this is to create a table in a database, or a single file, that stores name/value pairs. For example, the “ServerUrl” configuration setting might be set to “localhost” in the development environment, whereas in production it’s the domain name you probably purchased.

At first glance the problem with adopting this is that most tools have their own method of configuration, so to make this work you need a way to populate their configuration from this database or file. Do this with the following process:

  1. Create a table or file named “ConfigurationSettings” or “ApplicationSettings” for example, that holds the name/value pairs for environment-specific configuration. You can use nested pairs, or related tables if you need more complicated configuration.
  2. Create a build script for each environment (T-SQL, PSake, MSBuild, rake etc.) that populates the table or file with the values appropriate for it. If you have 4 environments, you will have 4 of these files or scripts.
  3. When you target a build at an environment, run the appropriate build script to overwrite the configuration values in that environment with the ones from the script. Note that I said overwrite, as you want to prevent people in the field from changing the configuration of your environment without doing a build. This is because configuration changes should be tested just like code.
  4. For each tool or asset that you want to configure, create a build script (PSake, MSBuild, rake etc.) that reads the values it needs by name from the table or file populated in step 3, and updates the configuration in the format needed. An example would be updating a web.config file’s XML data from the data in the table or file, or applying Active Directory permissions from the data in the table or file.
  5. Create a page, dialog, or view in your application that lists all of the data in the configuration table or file. This can be used by your personnel to easily see all the environment-specific configuration settings in one place.
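The steps above can be sketched in a few lines, assuming a single JSON file as the configuration store; the setting names, environments, and config template here are all hypothetical.

```python
# A minimal sketch of steps 1-4: one store of name/value pairs,
# per-environment values that overwrite it, and a script that pushes
# the stored values into a tool-specific configuration format.
import json

# Step 2: one block like this per environment (a table would work equally well).
ENVIRONMENTS = {
    "development": {"ServerUrl": "localhost", "LogLevel": "Debug"},
    "production":  {"ServerUrl": "www.example.com", "LogLevel": "Warn"},
}

def write_settings(environment, path="ApplicationSettings.json"):
    """Step 3: overwrite the store with the values for one environment."""
    with open(path, "w") as f:
        json.dump(ENVIRONMENTS[environment], f, indent=2)

def apply_to_template(template, path="ApplicationSettings.json"):
    """Step 4: read values by name and render a tool-specific config file."""
    with open(path) as f:
        settings = json.load(f)
    for name, value in settings.items():
        template = template.replace("{%s}" % name, str(value))
    return template

write_settings("development")
print(apply_to_template('<appSettings serverUrl="{ServerUrl}" />'))
# -> <appSettings serverUrl="localhost" />
```

Note that the per-environment values always overwrite the store, never merge with it, which is what prevents out-of-band edits in the field from surviving a build.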

This may seem like a hoop to jump through considering Microsoft and other vendors already provide environment-specific configuration files for some of their technologies, but I still encourage you to do this for the following reasons:

  1. When something goes wrong in one environment that works in another, it is much faster to look at a page with a flat list of configuration settings than to look in source control at a bunch of files or scripts that can be anywhere in your source tree.
  2. When environment-specific configuration is stored in source control as scripts, you have an audit trail of how those changes have occurred over time in the history of each file.
  3. Whenever you need a new environment, you can simply create a new script with data for that environment and you already have an automated means of populating the configuration mechanisms used by all of the tools and libraries you leverage.
  4. When you need to provide environment-specific configuration for a new technology, you can script setting it up and not worry about whether it supports environment specific methods out of the box.

Pay off your technical debt by preferring API clarity to generation efficiency

I’ve built the technical side of my career on combining technologies from Microsoft – easy to sell into enterprises that require the confidence that comes from extensive support contracts and a huge market footprint – with open source technologies that steer the direction of the industry ahead of the enterprise curve, eventually to be embraced by it.

Microsoft has always provided powerful tools for developers in its Visual Studio product line. It focuses on providing more features than any other vendor, and on the flexibility to let developers design their software with the patterns that make the most sense to them. Because of this, the community is full of discussion, and there are always new ways to combine Microsoft technologies to do similar things – but with quite a bit of variance in the architecture or patterns used to get them done. It can be daunting, as a new developer or a new member of a team, to comprehend some of the architectural works of art created by well-intentioned astronauts.

After I learned my first handful of programming languages, I began to notice the things that were different between each of them. These differences were not logic constructs, but rather how easy or difficult it could be to express the business problem at hand. Few will argue that a well designed domain model is easier to code against from a higher level layer in your application architecture than a direct API on top of the database – where persistence bleeds into the programming interface and durability concerns color the intent of the business logic.

In recent years domain-specific languages have risen in popularity; they are employed to great effect in open source projects and are just starting to be embraced in Microsoft’s technology stack. A domain-specific language is simply a programming interface (or API) whose syntax is optimized for expressing the problem it’s meant to solve. The result is not always pretty – sometimes the problem you’re trying to solve shouldn’t be a problem at all due to bad design. That aside, here are a few examples:

  • CSS – the syntax of CSS is optimized to express the assignment of styling to markup languages.
  • Rake/PSake – the syntax of these two DSLs is optimized for expressing dependencies between buildable items and for creating deployment scripts that invoke operating system processes – typically command-line applications.
  • LINQ – The syntax of Language Integrated Query from Microsoft makes it easier to express relationship traversal and filtering operations from a .NET language such as C# or VB. Ironically, I’m of the opinion that LINQ syntax is a syntactically cumbersome way to express joining relationships and filtering appropriate for returning optimized sets of persisted data (where T-SQL shines). That’s not to say T-SQL is the best syntax – but that using an OO programming language to do so feels worse to me. However, I’d still consider its design intent that of a DSL.
  • Ruby – the ruby language itself has language constructs that make it dead simple to build DSLs on top of it, leading to its popularity and success in building niche APIs.
  • YAML – originally “Yet Another Markup Language”, now the recursive “YAML Ain’t Markup Language” – is optimized for expressing nested sets of data, their attributes, and values. It doesn’t look much different from JSON at first glance, but you’ll notice the efficiency once you use it on a real project.
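To see what an internal DSL buys you, here is a toy task-dependency DSL in the spirit of Rake/PSake, sketched in Python rather than Ruby; the `task`/`run` API and the task names are invented for illustration.

```python
# A toy internal DSL for declaring build tasks and their dependencies.
# The surface syntax is optimized for one problem: "this runs after that".
TASKS = {}

def task(name, depends_on=()):
    """Register a named build task with optional dependencies."""
    def register(fn):
        TASKS[name] = (tuple(depends_on), fn)
        return fn
    return register

def run(name, done=None):
    """Run a task after its dependencies, each at most once."""
    done = set() if done is None else done
    if name in done:
        return done
    deps, fn = TASKS[name]
    for dep in deps:
        run(dep, done)
    fn()
    done.add(name)
    return done

@task("compile")
def compile_sources():
    print("compiling sources")

@task("test", depends_on=["compile"])
def run_tests():
    print("running tests")

run("test")  # runs "compile" first, then "test"
```

The whole vocabulary is two words, `task` and `run`, which is exactly the point: once memorized, expressing a new dependency takes seconds and no tooling.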

Using a DSL leads to higher cognitive retention of the syntax, which tends to lead to increased productivity and a reduced need for tools. IntelliSense, code generation, and wizards can all take orders of magnitude longer to use than simply expressing the intended action in a DSL whose most common statements you have memorized, because the keyword and operator set is small and optimized within the context of one problem. This is especially apparent when you have to choose a code generator or wizard from a list of many other generators unrelated to the problem you’re trying to solve.

Because of this, you will reduce your cycle time by evaluating tools, APIs, and source code creation technologies not on how much code your chosen IDE or command-line generator spits out, but on the clarity, comprehensibility, and flexibility of that code once written. I am all for code generation (“rails g” is still the biggest game-changing productivity enhancement for architectural consistency in any software tool I’ve used), but there is still a cost to maintain that code once generated.

Here are a few things to keep in mind when considering the technical cost and efficiency of an API in helping you deliver value to customers:

  • Is the number of keywords, operators, and constructs optimized for expressing the problem at hand?
  • Are the words used, the way they relate to each other when typed, and even the way they sound when read aloud easy to comprehend by someone trying to solve the problem the API is focused on? Related to this is to consider how easy it will be for someone else to comprehend code they didn’t write or generate.
  • Is there minimal bleed-over between the API and others that are focused on solving a different problem? Is the syntax really best for expressing the problem, or just an attempt at doing so with an existing language? You can usually tell the latter when you find yourself using language constructs meant to solve a different problem just to make the code easier to read. A good example is “Fluent” APIs in C# or VB.NET. These use lambda expressions for property assignment, where the intent of a lambda is to enable a pipeline of code to modify a variable via separate functions. You can see the mismatch in the funky syntax, and in the low comprehension of someone new to the concept without an explanation.
  • Are there technologies available that make the API easy to test, but have a small to (highly preferred) nonexistent impact on the syntax itself? This is a big one for me; I hate using interfaces just to allow testability when dependency injection or convention-based mocking can do much better.
  • If generation is used to create the code, is it easy to reuse the generated code once it has been modified?

You’ll notice one consideration I didn’t include – how well it integrates with existing libraries. This is because a DSL shouldn’t need to – it should be designed from the ground up to either leverage that integration underneath the covers, or leave that concern to another DSL.

When you begin to include these considerations in evaluating a particular coding technology, it becomes obvious that the clarity and focus of an API is many times more important than the number of lines of code a wizard or generator can create to help you use it.

For a powerful example of this, create an ADO.NET DataSet and look at the code generated by it. I’ve seen teams spend hours trying to find ways to backdoor the generated code or figure out why it’s behaving strangely until they find someone created a partial class to do so and placed it somewhere non-intuitive in the project. The availability of Entity Framework code first is also a nod towards the importance of comprehension and a focused syntax over generation.

Why continuously deliver software?

Since I adjusted the focus of this blog over the past couple of weeks, one of the main subjects I’ve been writing about is continuous delivery. This is a term coined in the book of the same name. I’m attempting to summarize some of the concepts in the book, with an emphasis on how the practices described in it can be applied to development processes that are in trouble. I’ll also discuss specific technologies in the Microsoft and Ruby communities that can be used to implement them.

If you really want to understand this concept, I can’t overemphasize the importance of reading the book. While I love blogs for finding a specific answer to a problem or getting a high-level overview of a topic, if you are in a position to enact change in your project or organization, it really pays to read the entire thing. It took me a week of odd hours to read, and I purchased the Kindle version so I can highlight the important points and have it available on my mobile phone and in the browser.

That being said, I want to use this post to clear up what continuous delivery is not, and why you would use it in the first place.

Continuous delivery is not

  • Using a continuous integration server (Team Foundation Server, CruiseControl.NET, etc.)
  • Using a deployment script
  • Using tools from Microsoft or others to deploy your app into an environment

Rather, the simplest description of the concept I can think of is this:

“Continuous delivery is a set of guidelines and technologies that, when employed fully, enable a project or organization to deliver quality software with new features in as short a time as possible.”

Continuous delivery is

  • Requiring tasks to have a business case before they are acted upon
  • Unifying all personnel related to software development (including operations) and making them all responsible for delivery
  • Making it harder for personnel to cut corners on quality
  • Using a software pattern known as a “delivery pipeline” to deliver software into production
  • Deliberate improvements to the processes used for testing, configuration, and dependency management that eliminate releasing low-quality software and make it easy to troubleshoot problems

I’ll continue to blog about this and I still encourage you to read the book, but one thing that really needs to be spelled out is why you would want to do this in the first place. There are several reasons I can think of that might not be immediately apparent unless you extract them out of the bounty of knowledge in the text.

Why continuously deliver software?

When personnel consider their work done but it is not available to users:

  • That work costs money and effort to store and maintain, without providing any value.
  • You are taking a risk that the market or technologies may change between when the work was originally desired and when it is actually available.
  • Non-technical stakeholders on the project cannot verify that “completed” features actually work.

When you can reduce the time it takes to go from an idea to delivering it to your users:

  • You get opportunities for feedback more often, and your organization appears more responsive to its customers.
  • It increases confidence in delivering on innovation.
  • It eliminates the need to maintain hotfix and minor revision branches since you can deliver fixes just as easily as part of your next release.
  • It forces personnel to focus on quality and estimating effort that can be delivered, instead of maximum work units that look good on a schedule.

And lastly: when personnel must deliver their work to users before it can be considered done, it forces the organization to reduce the amount of new functionality they expect in each release; and to instead trade volume for quality and availability.