IT deployment standards: why now?

Much like what is happening in the “big data” space, challenges with interoperability in the IT deployment tool market are converging to create an opportunity for standardization. Before I go any further, let me introduce two terms that have no official meaning in the industry but will make this article easier for you to read:

Deployment Providers – These are command-line utilities, scripts, or other self-contained processes that can be “kicked off” to deploy a single aspect of an overall solution. Examples include a code compiler, a command-line utility that executes database scripts, or a script that provisions a new node in VMware or on Amazon Web Services.

Deployment Orchestrators – These are higher-level tools that oversee the coordinated execution of the providers. They often include the ability to add custom logic to the overall flow of a deployment and combine multiple providers’ activities to provide an aggregate view of the state of a complex solution’s deployment. Some open source examples are Puppet, Chef, and Powerdelivery; there are many commercial solutions as well.

With those two definitions out of the way, let me begin by describing the problem.

Do we really need deployment standards?

First, as more companies find they are unable to retain market leadership without increasing release frequency and stability, deployment technologies that differ wildly in operation as personnel move between teams (or jobs) create a significant loss in productivity. Much like Ruby on Rails disrupted the web application development market by demonstrating that a consistent directory structure and set of naming conventions leads to less variability in software architecture and reduced staff on-boarding costs, so too is the IT deployment space in need of solutions for reducing complexity in the face of economic pressure. As developers and operations personnel collaborate to better deliver value to their customers, a consistent minimum level of functionality that Deployment Orchestrators can rely on to deliver the technology stack they’ve chosen provides the opportunity to reduce IT delivery costs for both of these job roles.

Secondly, as the number of technologies that need to be deployed increases exponentially, more organizations are struggling to get the same quality of deployment capabilities from each of them. While Deployment Orchestrators often include off-the-shelf modules or plugins for deploying various assets, the external teams (often open source contributors or commercial vendors) that create the Deployment Providers rarely have a financial incentive to ensure continued compatibility with those modules as their tools change. This places the burden of compatibility on the Deployment Orchestrator vendor. These vendors must constantly revise their modules or plugins to support changes originating from the downstream teams that deliver the Deployment Providers, and this coordination will grow increasingly difficult as the technologies being deployed are themselves released more frequently through industry adoption of Continuous Delivery.

Standards create their own problems

Whenever one speaks of standardization, there are a number of legitimate concerns. If history repeats itself (and it often does), the leading Deployment Orchestration vendors will lobby hard to have any emerging standards align best with their product’s capabilities – and not necessarily with what is right for the industry. Additionally, as standards are adopted, they must often be broken for innovation to occur. Lastly, a committed team of individuals with no allegiance to any single vendor, and with a shared financial incentive, must be in place to ensure the standard is revised quickly enough to keep pace with the changes needed by the industry.

Scope of beneficial IT deployment standardization

What aspects of IT deployment might benefit from a standard? Below is a list of some that I already see an urgent need for in many client deployments.

Standard format for summary-level status and diagnostics reporting

When deployment occurs, some Deployment Providers have better output than others, and having a standard format in which these tools can emit at least summary-level status or diagnostics that can be picked up by Deployment Orchestrators would be advantageous. Today most Deployment Providers’ deployment activities involve scripting and executing command-line utilities that perform the deployment. These utilities often generate log files, but Deployment Orchestrators must understand the unique format of the log file for every Deployment Provider that was executed to provide an aggregate view of the deployment.

If Deployment Providers could generate standard status files (in addition to their own custom detailed logging elsewhere) that contain at least the computing nodes upon which they took action, a summary of the activities that occurred there, and links to more detailed logs, Deployment Orchestrators could render the files in the format of their choice and enable deep linking between summary-level and detailed diagnostics across Deployment Providers.
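
As a purely hypothetical illustration (no such standard exists today, and every field name below is invented), a Deployment Provider written in PowerShell might emit a summary status file like this alongside its detailed logs:

    # Hypothetical example only: the field names are assumptions, not an actual standard.
    $summary = @{
        provider = "DbDeploy.exe"                # the Deployment Provider that ran
        status   = "Succeeded"                   # overall outcome
        nodes    = @(
            @{
                name       = "SQL01.contoso.local"                 # node acted upon
                activities = @("Applied 12 database migration scripts")
                detailLog  = "\\share\logs\dbdeploy-SQL01.log"     # deep link to details
            }
        )
    }

    # Write the summary where the Deployment Orchestrator expects to find it.
    $summary | ConvertTo-Json -Depth 5 | Out-File ".\deploy-summary.json" -Encoding UTF8

A Deployment Orchestrator could then aggregate these files from every provider it invoked and render them however it chooses.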

More Deployment Orchestrators are beginning to provide this capability, but they must invest significantly to adapt the many varying log formats into something readable, where insight can occur without having to see every detail. A standard with a low barrier to adherence would encourage emerging technologies to support it as they first arrive in the market, lowering the integration burden on Deployment Orchestration vendors so they can invest in features more innovative than simple integration.

Standard API for deployment activity status

When Deployment Providers are invoked to do their work, they often return status codes that indicate to the Deployment Orchestrator whether the operation was successful. Deployment Providers don’t always use the same codes to mean the same thing, and some will return a code known to indicate success even though the only way to determine that a problem occurred is by parsing a log file. Having a consistent set of status codes, and being able to certify a Deployment Provider’s compatibility with them, would improve the experience for operations personnel and reduce the complexity of troubleshooting failures.
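
As a sketch of what this could enable, assuming a hypothetical shared table of exit codes (the values below are invented for illustration), an orchestrator could interpret any certified provider’s result without provider-specific logic:

    # Hypothetical standard exit codes; an actual standard would define these values.
    $StandardExitCodes = @{
        0 = "Succeeded"
        1 = "SucceededWithWarnings"
        2 = "Failed"
        3 = "FailedRequiresRollback"
    }

    & .\DbDeploy.exe -Environment "Test"       # hypothetical Deployment Provider executable
    $outcome = $StandardExitCodes[$LASTEXITCODE]

    if ($null -eq $outcome) {
        throw "Provider returned a non-standard exit code: $LASTEXITCODE"
    }
    Write-Host "Deployment provider reported: $outcome"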

Standard API for rolling back a deployment

Deployment in an enterprise or large cloud-hosted product often involves repeating activities across many computing nodes (physical or virtual). If a failure occurs on one node, in some situations it may be fine to simply indicate that this node should be marked inactive until the problem is resolved, while in other situations the deployment activities already performed on prior nodes should be rolled back.

Rollbacks of deployments are typically executed in reverse order, and each Deployment Provider needs state information to know what is being rolled back. Today’s Deployment Orchestration vendors don’t make this easy, and significant investment in scripting is necessary to make it happen. A standard API through which Deployment Providers could receive notification, along with the relevant state, when a rollback occurs, and then take the appropriate action, would allow for consistent behavior across more industry solutions and teams.
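
To make this concrete, here is a minimal sketch of the kind of rollback hook such a standard might ask Deployment Providers to expose; the function and parameter names are invented for illustration, not part of any existing API:

    function Invoke-ProviderRollback {
        param(
            [string[]]  $Nodes,    # nodes whose changes should be reverted
            [hashtable] $State     # state captured during the forward deployment
        )

        # Roll back in reverse order of the original deployment
        [array]::Reverse($Nodes)
        foreach ($node in $Nodes) {
            Write-Host "Reverting $node using state $($State[$node])"
            # ... provider-specific revert logic would go here ...
        }
    }

The orchestrator would call this hook with the state it captured from the provider during the forward pass.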

Standard API for cataloging, storing, and retrieving assets

When a failed deployment must be rolled back, Deployment Orchestrators need access to assets like compiled libraries, deployment packages, and scripts that were generated during a prior deployment to properly revert a node to its previous state. Additionally, it is often desirable to scale out an existing deployment onto additional nodes after the initial deployment has already completed, to meet additional capacity demands.

Depending on the number and size of the assets to store, the number of revisions to retain, and the performance needs of the solution, anything from a simple network share to dedicated cloud storage might fit the bill. A standard API abstracting the underlying storage, which Deployment Providers can use to store their assets and Deployment Orchestrators can use to retrieve the correct versions, would enable organizations to select storage that meets their operational and financial constraints without being locked in to a single vendor’s solution.
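
A minimal sketch of such an abstraction, assuming a plain network share as the backing store (function names and paths are invented for illustration; a cloud-backed implementation could expose the same two functions):

    $AssetRoot = "\\fileserver\deploy-assets"

    function Publish-DeploymentAsset {
        # Store an asset under a specific release so it can be retrieved for rollback or scale-out
        param([string] $Release, [string] $Path)
        $target = Join-Path $AssetRoot $Release
        New-Item -ItemType Directory -Path $target -Force | Out-Null
        Copy-Item -Path $Path -Destination $target -Recurse -Force
    }

    function Get-DeploymentAsset {
        # Retrieve a previously stored asset for a given release
        param([string] $Release, [string] $Name, [string] $Destination)
        $source = Join-Path (Join-Path $AssetRoot $Release) $Name
        Copy-Item -Path $source -Destination $Destination -Recurse -Force
    }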

Standard API for accessing a credential repository

In addition to publishing assets as a result of deployment activities, Deployment Providers also often need to access security credentials that have been granted permission to modify the infrastructure and components being configured and deployed to. On Linux and OS X this is often handled with SSH keys, while on Windows it is typically a combination of Kerberos and (in double-hop scenarios through Windows PowerShell) CredSSP. Rather than each deployment implementing a custom method for locating keys on the correct servers, a credential repository where these keys can be stored and then securely accessed by trusted nodes would simplify the management of the security configuration Deployment Providers need to do their work with the right auditability.
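
As a rough sketch of the idea, here the “repository” is nothing more than a DPAPI-protected file on a secured share (paths and names are invented, and the exported credential can only be decrypted by the account and machine that exported it); a real standard would define the API, transport, and auditing:

    $credentialStore = "\\secureserver\credentials"

    # Stored once by an administrator on the build/deployment node:
    #   Get-Credential | Export-Clixml (Join-Path $credentialStore "deploy-svc.xml")

    # Retrieved at deployment time by the Deployment Provider:
    $deployCredential = Import-Clixml (Join-Path $credentialStore "deploy-svc.xml")

    Invoke-Command -ComputerName "WEB01" -Credential $deployCredential -ScriptBlock {
        # provider actions that require elevated rights run here
    }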

Summary

With these standards in place, Deployment Orchestration vendors still have plenty of room to innovate in their visualization of the state of nodes in a delivery pipeline, in rendering status in their own style and format for the devices they wish to support, and in coordinating these APIs to provide performant solutions across the Deployment Providers that adopt such a standard. For a team bringing a new Deployment Provider to market, having the option of adhering to a standard that ensures it can be orchestrated consistently with the rest of the solution, of which their technology is but one part, would improve adoption. When standard APIs such as these are made available by a single Deployment Orchestration vendor without first having been established as at least a proposed standard, it is very difficult to motivate the broader community that creates Deployment Providers to do any work to support those proprietary APIs.

What do you think? Has the industry tried various aspects of this before outside of a single vendor? Has the market matured since then where the general audience might now see the value in this capability? Would it be too difficult to provide an API in enough languages and formats that orchestration could occur regardless of the platform? Let me know your comments below!

Photo: “lost legos 1/3” © sharyn morrow creative commons 2.0

Selecting a Deployment Pipeline: 10 Considerations

While a team must adopt a customer-guided, acceptance criteria-driven culture to begin releasing IT assets frequently and with high quality, eventually an investment needs to be made to create a Deployment Pipeline. Put simply, this is a technology that enables an organization to use automation to eliminate manual processes necessary in the release of their IT assets. The industry is full of automation tools and many organizations will need to use a combination of several to achieve one-click release of assets into an environment, enabling Continuous Delivery.

In no particular order follows a list of considerations I encourage you to take into account when selecting technologies for building your pipeline. These considerations are rooted in guidance outlined in the Continuous Delivery book.

#1 – Does it work with your source control systems?

When engineers check in IT assets to a source control system or some other asset repository, the delivery pipeline should automatically trigger some actions. These actions might be things like compiling code, running unit tests, configuring servers, and deploying assets. If you have one team that uses GitHub and another that uses Team Foundation Server and both teams’ deliverables are combined to produce the final product – you don’t want to get into a situation where someone needs to manually push and pull code between the two repositories.

If one deliverable is a dependency of the other, create a pipeline where the production environment of the first team’s deliverable is located someplace where the dependent team can consume it. This might be a UNC path; a RubyGems, NuGet, or Chocolatey repository (public or private); or a host of other options. This allows the first team to make changes in their own development environment, do their own acceptance testing, and then choose when they feel they have a build solid enough to push to the other team. The dependent team can then move their own assets and the upstream team’s deliverable through their own pipeline and eventually into the production environment where users will consume it. This technique is sometimes known as a cascading deployment pipeline.
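
For example, the hand-off between the two teams might look like publishing the upstream deliverable to a private NuGet feed once it passes acceptance testing (the feed URL and package names below are invented for illustration):

    # Package and publish the upstream team's approved build to a private feed.
    nuget pack .\SharedLibrary.nuspec -Version 1.4.2
    nuget push .\SharedLibrary.1.4.2.nupkg -Source "https://nuget.internal.example.com/feed" -ApiKey $env:NUGET_API_KEY

The downstream team then restores this package like any other dependency and moves it through their own pipeline.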

#2 – Does it move assets between environments without recompilation?

One important principle of Continuous Delivery is to never deploy different binaries to different environments. If you compile or generate assets that are deployed to production differently than the copy in the UAT environment, you open your team up to an unnecessary risk of defects. Your deployment pipeline should be able to deploy the same assets to multiple environments with the only difference being their configuration.
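
A minimal sketch of this “build once, deploy many” principle in PowerShell, with paths and names invented for illustration:

    param([string] $Environment)   # e.g. "Dev", "UAT", "Production"

    # The package was compiled exactly once; only configuration differs per environment.
    $package     = "\\dropserver\builds\MyApp\1.8.0"
    $destination = "\\$Environment-web01\c$\inetpub\MyApp"

    New-Item -ItemType Directory -Path $destination -Force | Out-Null
    Copy-Item -Path "$package\*" -Destination $destination -Recurse -Force
    Copy-Item -Path ".\config\$Environment.config" `
              -Destination (Join-Path $destination "Web.config") -Force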

#3 – Is it approachable by operations personnel?

To achieve the organizational alignment of bringing development and operations personnel together through DevOps, operations personnel will assist in the creation of, and ongoing change to, the delivery pipeline to support the value needed by the IT offering. If your operations personnel are not object-oriented programmers well versed in polymorphism, domain-driven design, functional programming, and other product development skills, you will have a challenging time getting them to adopt and contribute to the pipeline if those are the skills needed to work with it.

I recommend considering the selection of automation technologies that use scripting languages such as Windows PowerShell, bash, or ruby to automate. These languages have a lower barrier to entry, can execute without needing to be compiled first, and don’t require purchase of an expensive Integrated Development Environment (IDE) to modify. A good list of powerful tools that are built on these languages to start investigating might be Puppet, Chef, Capistrano, Rake, Powerdelivery, and PSake. I can’t honestly recommend MSBuild, Maven, or Ant for the pipeline (though they are fine for compiling things) because XML tends to be too fragile and difficult to comprehend for building even the simplest of conditional logic.
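
For a feel of how approachable these options are, here is a small psake-style script; the task names and steps are invented for illustration, and the exact conventions are documented in the psake project itself:

    Task Default -Depends Compile, Deploy

    Task Compile {
        # Compile the solution in Release mode
        Exec { msbuild .\MyApp.sln /p:Configuration=Release /v:minimal }
    }

    Task Deploy -Depends Compile {
        # Copy the compiled output to a test server
        Copy-Item .\MyApp\bin\Release\* \\testserver\c$\inetpub\MyApp -Recurse -Force
    }

An operations engineer who has never opened an IDE can still read, run, and modify a script like this.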

#4 – Are deployment assets versioned along with deliverables?

Because the deployment pipeline will be undergoing change, it will contain configuration and logic meant to deploy the assets of a release at a given moment in time. To enable rollback (I’ll talk more about this later) and reduce surprises, it is preferable to find a way to version the pipeline’s actions along with your deliverables. This is fairly easy to do with most of the technologies I cited above, but there are some other tools on the market that don’t lend themselves well to this important tenet.

#5 – Is it possible to write rollback procedures should aspects of the deployment fail?

Automating the “backing out” of changes made during a deployment, also known as a rollback, is an important step in gaining the confidence necessary to release to customers more frequently. This is the process of returning a deployment to its prior state if a failure somewhere along the way occurs. If your rollback process is manual, you introduce the chance of errors, and troubleshooting an improperly performed rollback while the production system is down can cause immense levels of stress.

Look for technologies that keep track of actions performed and if there is a failure, either have “hooks” to let you write your own rollback logic or perform the rollback actions (in reverse order of when deployed) automatically. There’s a good chance you will need to create some of the rollback functionality yourself.

#6 – Does it prevent skipping of environments when deploying release candidates?

The reason teams create dedicated development, staging (or user acceptance), and production environments is to isolate changes being made from their customers and staff performing acceptance testing. One of the highest costs in deployment that kills a team’s ability to release more frequently is when the configuration and deployment of assets to these environments is manual. This is because it is simply too easy for a person to forget to make a change consistently in all environments.

Look for a technology that keeps track of which releases have been deployed to which environments, and will prevent a build that has not been tested from being pushed into production, for example. This is one of the reasons I created powerdelivery: there isn’t a tool in the Microsoft ecosystem for putting quality gates into automated deployments between environments, and the Windows Workflow Foundation technology upon which automated builds in TFS were created is too unwieldy and slow to keep up with the needs of a team that is releasing frequently.

#7 – Is it possible to use security to control who can initiate deployments into each environment?

Most teams put a policy in place that determines who is allowed to deploy releases into each environment. This is still a good practice, and should be agreed upon and communicated when planning to begin continuously delivering any IT offering. To help meet Sarbanes-Oxley (SOX) regulatory compliance and prevent mistakes, it is often not enough. Look for technologies that allow configuration of who can promote a build from development to test (often a QA resource) and from there into production (often an operations resource).

#8 – Does the technology provide access to compilers, deployers, and test frameworks you use?

You may have read some great story about how a darling start-up in Silicon Valley implemented Continuous Delivery using JavaScript, but if you can’t use it to compile, deploy, and test YOUR code you will end up fighting it. Look for technologies that include support out of the box for the tools and frameworks you use, or have a low barrier to integrating them.

#9 – Does the technology provide access to infrastructure automation for the on-premises or cloud platforms you use?

Continuous Delivery is more than automated deployment – you also need to be able to spin up servers or virtual machines on premises and in the cloud depending on your requirements. If you are using Amazon Web Services, Windows Azure, Heroku, or any of the other popular hosting platforms, make sure your pipeline technology can utilize the best APIs from these vendors. If your technology allows you to deploy a website or mobile application but not harness the rich features of your hosting platform, for example, you might be missing out on the ability to automate your entire infrastructure and benefit from things like automatic scaling of your application based on demand.

#10 – Is it possible to replay activities consistently across multiple nodes?

As more IT offerings scale out their processing and storage across multiple physical or virtual computing nodes, deployment technologies that enable Continuous Delivery need first-class support for parallel activities. If your ASP.NET MVC or Ruby on Rails website will be hosted on a cluster of nodes, look for a technology that can accept the DNS names or IP addresses of those nodes and perform deployments seamlessly across them, rolling back the nodes that were deployed to if one fails. If your projects are automatically compiled every time a developer checks in code but the deployment process requires repeating manual steps across multiple nodes, you are in for a long road of troubleshooting configuration mismatches and will have a way to go before you can reduce your release cycle time to be more agile in your market.
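
A minimal sketch of that behavior using PowerShell remoting, with node names and the deploy/rollback steps left as placeholders:

    $nodes     = @("WEB01", "WEB02", "WEB03")
    $completed = @()

    try {
        foreach ($node in $nodes) {
            Invoke-Command -ComputerName $node -ErrorAction Stop -ScriptBlock {
                # provider-specific deployment steps for one node run here
            }
            $completed += $node
        }
    }
    catch {
        # A node failed; undo the nodes that already succeeded, most recent first
        [array]::Reverse($completed)
        foreach ($node in $completed) {
            Invoke-Command -ComputerName $node -ScriptBlock {
                # provider-specific rollback steps run here
            }
        }
        throw
    }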

Hopefully this list of considerations helps your team think objectively about some of the hype out there around deployment pipeline technologies, and you can better take into account some of the additional costs of development and maintenance necessary to get them to work for your particular IT offerings.

Have you read the Continuous Delivery book or are you starting your journey towards reduced cycle times? Let me know some of the struggles and successes you’ve had around deployment pipeline selection and implementation below!

Tips for Estimating Continuous Delivery

Ahh, estimating. We all hate to do it, but it’s critical when releasing more often. The good news is it’s not nearly as important to get it “just right” for an entire effort, but it is important that the current sprint is estimated well. The following article should help you understand what’s different about estimating a sprint (release) when you use Continuous Delivery as opposed to using a SCRUM/Agile process where the work is done in small iterations but not actually released.

Because delivering IT assets is a highly variable process (if it weren’t every task would take the same time just like creating parts on an assembly line), it is highly likely that one or more estimates will be off by some amount. When your team starts a sprint of work using Continuous Delivery, the goal is to actually release something at the end. Because of this focus, we need a way to accommodate variability.

The problem with “buffers” in tasks

Most folks remedy variability by putting “buffer time” into their estimates to account for unknowns. If you put buffers into tasks, there are two problems that occur. The first is that psychologically, those assigned to the tasks have a misplaced sense of “extra time” left to complete their tasks. You do not want individual tasks to be scheduled to take any longer than what’s expected. If your team doesn’t understand a user story well enough to describe verifiable acceptance criteria to demonstrate it works, don’t put it in the sprint! If you do this, you are making it highly likely that you won’t release at the end.

The second problem with putting buffers into individual tasks is that you have just scheduled all of your tasks to essentially be as costly as they can be. You can’t track to a realistic estimate and are essentially planning on each task taking as long as possible. I can’t stress this enough – don’t use buffers in task estimates for sprints when you want to release the work at the end.

Why high capacity utilization is dangerous

Because the goal of most IT value chains is to deliver value as efficiently as possible, it can be tempting to load resources to high capacity utilization. Because we don’t want buffers in our tasks but the work is highly variable, the proper place to accommodate this is by scheduling resources with excess capacity. By leaving excess capacity, resources that finish tasks early are freed up to help others with theirs and it is more likely that having multiple people working on a problem will still get it finished on time than asking a single, highly utilized resource to “just work faster”. The risk of any one member blocking the entire release by variability encountered in getting their task done goes up exponentially as you pass 70% utilization. If you don’t leave extra capacity for resources, you are all but ensuring that you will not be able to release the work at the end.

Sprint “Zero”

The first sprint of an effort that will employ automated continuous releases is the place to estimate and complete tasks that are hard to do once the entire team has started work. The following is a list of some things to consider scheduling for the first sprint (or several depending on the size of the effort to be worked on) to prepare for the entire team coming on board.

  1. Time to setup infrastructure like servers, security accounts, and opening of ports
  2. Time to setup source control for any change controlled assets
  3. Time to build an initial website, database, business intelligence data mart, mobile application, or whatever you will be delivering that is either empty or has just one page or set of data
  4. Time to create an automated deployment process for deploying changes through your various environments (development, UAT, production etc.)
  5. Time to create a policy for who will approve release candidates to be promoted from one environment to another
  6. Time to populate and prioritize the backlog and do business analysis necessary for agreeing on the acceptance criteria for the first sprint where the team will work on functional deliverables

Functional Sprints

Once the larger effort gets underway and the team is brought in, the list below includes some things you should consider when estimating work that will be released at the end of the sprint. This is by no means a comprehensive list but includes some things you might normally put off until the end of a larger “iterative waterfall” project where the team builds functionality in sprints but doesn’t release it until a stabilization phase at the end.

  1. Time to implement new features or changes
  2. Time to test features or changes
  3. Time to fix any defects found during testing
  4. Time to change anything about the automated deployment or testing process needed to support the features being delivered
  5. Time to update user documentation that describes the new features or changes
  6. Time to hold the sprint review meeting, accommodate feedback, and release to production
  7. Time to hold a sprint retrospective to capture what went well and what actions will be taken to remedy problems found

Hopefully this article helps you to consider some of the changes needed when estimating work to release more frequently. The team will complete less functional volume than you might be used to, but the trade-off is that at the end – you can release it. As a parallel effort to the ongoing sprints of releasing new IT offering functionality, Product Management can work with early adopters of the new system or customers to integrate their feedback into the backlog to make adjustments keeping the team’s efforts in line with their needs.

Top 5 business myths about Continuous Delivery

When a team decides to try reducing the time it takes for their ideas to get to their customers (cycle time), there are a few new technical investments that must be made. However, without business stakeholders supporting the changes in a SCRUM approach that delivers frequent releases, decisions and planning are driven by gut feel and not quantifiable outcomes. The following is a list of the top 5 myths I encounter (and often address when I provide coaching) to help staff that are not solely technically-focused when they begin adopting Continuous Delivery.

#5: By automating deployment, we will release more profitable ideas.

Automating deployment of IT assets to reduce low-value activities like manual configuration and deployment (with risky, error-prone human intervention) certainly can eliminate wasted capital as part of the process of releasing IT offerings, and is a key practice of Continuous Delivery. However, if the interval between releases is long, the cost of delaying the availability of those releases to customers adds risk in that their viability in the market may no longer be what was theorized when they were planned.

Once release frequencies are improved, measurement of customer impact and proper work management (specifically appropriate capacity planning and calculating the cost of delay for potential features) must be done to ensure that ideas that turn out to be misses in the market stop being worked on as soon as they are identified as bad investments. It is this harmony of smart economic decisions with respect to investing in the idea, combined with the technical benefits of building an automated deployment pipeline, that transforms the profitability of an IT value chain.

#4: We must automate 100% of our testing to have confidence in automating releases to production

Utilizing automated quality checks to ensure that changes to IT assets do not break existing functionality or dependent systems is certainly a good practice. A long manual test cycle is doubly problematic: it delays releases and adds risk since many teams try to get started on new work while testing is underway. When issues are found with a release candidate build or package being tested, engineers must stop what they are doing to troubleshoot and attempt to fix the problems.

On the flip side, automating the entire testing effort has its own risks, as the team can cost the business large sums by having to change and maintain tests whenever they make changes to the design, which happens frequently in Continuous Delivery. Deciding on an appropriate test coverage metric and philosophy should be treated with importance, and testing should not appear in work estimates as separate line items, which only invites its removal in an attempt to cut costs. Cutting quality is often the final dagger in the throat of a struggling IT offering.

#3: The CFO requires us to release everything in the backlog to report costs

Many businesses treat IT investments as capital expenditures since they can take advantage of amortization and depreciation to spread the cost of that investment over a longer time period. However, this assumes that the value in the investment provides a consistent return over the lifetime of it being used to generate revenue. A SCRUM process for delivering IT assets aligns better with being recorded as operating expenditures since a minimum viable offering is typically released with a low initial investment in the first few sprints, and the business makes ongoing “maintenance” changes to the offering as the priorities of the market and customer needs change. This is especially true today with everything moving increasingly to cloud based models for value consumption.

#2: We need a “rockstar” in each role to deliver profitable offerings

Many IT offerings that start with an idea are initially implemented with an expert in a particular technology or aspect of delivery, and the team leans on them early on for implementation and expertise. As the complexity of a solution expands, the biggest drain on the profitability of a team is no longer the availability of experts and the high utilization of people’s time – it is the time work to be completed spends waiting in queues. There are several ways to reduce wait time when work with a high cost of delay is held up in a queue. The two methods I see with the most value are to reduce the capacity utilization of team members, and to enable staff to work on more than one discipline.

When team members are highly utilized (their planned capacity is over 60%) this leaves no room for the highly-variable process of delivering IT offerings to account for unknowns that were not identified during planning or design of a cycle of implementation. If the cost of delaying the availability of an idea is high, the cost increases when the date planned for release is missed. Rather than loading resources up to a high capacity, leave them with reasonable overhead to collaborate, tackle unforeseen challenges, and help each other if they finish early.

When team members are specialized, the probability of one member being blocked from continuing by another goes up dramatically. Work to be completed spends more time in a queue wasting money and not moving closer to being made available to customers so it can realize a return. Though you will always have team members that have expertise in specific areas, resources that are willing to test, make informed product priority decisions, and help with deployment and automation are more valuable as part of an IT value stream than specialists. Use specialists when the work requires it, but scale more of your resources out across multiple disciplines for sustainability.

#1: Until we release everything in the backlog, we won’t succeed with our customers

This myth is driven by the manufacturing mindset of looking at IT offering delivery as though all features must be identified up front and misses the point of agile methods entirely. The backlog is a set of theories on what customers will find to be valuable at any given point in time. Any offering that takes more than one release to complete will have a working minimum viable product available to some audience where feedback can be gathered before it’s done.

Since the point of frequent releases is to get that feedback and let it impact the direction of the IT offering, planning to release everything in the backlog leaves no capacity for taking action on that feedback. If you only plan to release everything the business thinks is a good idea at the beginning of a project before letting customer feedback influence priorities, you are simply releasing milestones of planned up-front work – which is a classic waterfall delivery process.

Powerdelivery extension for Visual Studio 2013 released

I’ve released a version of the Visual Studio extension for using powerdelivery on Chocolatey. This extension allows you to connect to Team Foundation Servers, see which builds are present in each of your environments (Dev, Test, Production etc.), open the appropriate scripts and config files for developing the pipeline, and kick off builds.

To install it, have Powerdelivery and Visual Studio 2013 already installed. Then run the following from the command prompt (with Visual Studio closed):

cinst PowerDelivery-VSExtension-2013

Remember you can also install this extension for Visual Studio 2010 or 2012 as well using the appropriate package.

New online help, Visual Studio extensions, and getting started video for powerdelivery

It’s been just shy of a year since I made my first commit to GitHub to start the powerdelivery project. For those of you new to my blog, powerdelivery is a free toolkit that extends Microsoft Team Foundation Server, enabling your development and IT operations teams to continuously deliver releases of your IT assets (software, CMS systems, BI solutions etc.) to your customers. It uses Windows PowerShell and YAML (“YAML Ain’t Markup Language”) to provide a configuration-driven platform and can even do scaled deployment of Windows services and websites across farms and clusters.

If you’ve followed the project before but have yet to use it, there have been many improvements made and defects fixed in the past month, and I’ve got some exciting resources becoming available to make it easier to use and even more powerful.

New online help for powerdelivery

Over the past couple of months, I’ve created a basic site using Twitter Bootstrap to host the documentation for powerdelivery on GitHub Pages. The site is a major improvement over the wiki and has everything you need to begin using it. You can find the new online help page right here.

Visual Studio Extensions for 2010 and 2012

I’ve also created an extension for Visual Studio that allows you to view the status of each of your environments, promote builds, and add new deployment pipelines to existing Microsoft Team Foundation Server projects. The extension still has a few quirks but is stable for the most part and ready for use. You can install it from chocolatey here.

Getting started video

Lastly, I’ve created a getting started video that shows you how to get powerdelivery installed and configured, and to create your first pipeline. This video alone will not be enough to build your continuous delivery pipeline, but it’s a great walkthrough of the Visual Studio extension and what powerdelivery gives you. You can watch the video below via YouTube (watch on YouTube for HD).

Getting started with powerdelivery 2.3

PowerDelivery 2.0.8 adds integrated PowerShell help

Get it off chocolatey here. See the updated cmdlet reference wiki topic for details. This should make it easier to work with powerdelivery in the ISE or PowerGUI, as well as the prompt itself. Enjoy!

Powerdelivery 2.0.6 adds pretty 2012 summary page, delivery module API, and build asset cmdlets

I’ve just pushed out another update to powerdelivery. You can get it on Chocolatey (“cup powerdelivery” if it was already installed, “cinst powerdelivery” if not).

Pretty Build Summary on TFS 2012

Behold the new summary page when you run your builds in TFS 2012:

[Screenshot: the powerdelivery build summary page in TFS 2012]

The output you see above is included by default without any code needed on your part. You can however write your own messages into these sections using the new Write-BuildSummaryMessage cmdlet. This cmdlet only works on TFS 2012. When you specify the “name” parameter of the cmdlet, pass the name of a delivery pipeline code block (“Init”, “Commit”, or “Deploy” for example) to have your message show up in that section.
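
For example, to surface a line under the “Deploy” section of the summary page, something like the following should work; the -message parameter name here is an assumption for illustration, so check the cmdlet reference for the exact syntax:

    Write-BuildSummaryMessage -name "Deploy" -message "Database migrated to v42 on SQL01"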

Delivery Module API

This is a feature I’ve been wanting to add for a while now. One of the nice things about powerdelivery is that instead of having to learn all the moving parts in Windows Workflow Foundation, you just have a PowerShell script and CSV files. However, there are some things you might do over and over again in your script that you want to drive with configuration, to eliminate having to place code in your script for them. Examples are deploying databases, websites, etc. There are already cmdlets in powerdelivery to do this, but I’ve introduced a new Delivery Module API that can be used to create even more powerful reusable modules for delivering different types of assets.

To create a delivery module, you need to create a regular, simple PowerShell module following any of the instructions you’ll find on the web and make sure it’s available to your script (putting the folder that contains it in your PSModulePath system environment variable is easiest). Pick a name that’s unique, and from then on you may use that module in your scripts via the new Import-DeliveryModule cmdlet. If you call this cmdlet passing “MSBuild” for example, powerdelivery will try to invoke functions that match this name following the convention below:

Invoke-<ModuleName>DeliveryModule<Stage>

Where “ModuleName” might be “MSBuild” and “Stage” might be “PreCompile” (for example, Invoke-MSBuildDeliveryModulePreCompile). You can name the functions in your module starting with “Pre” or “Post” to have them run before or after the actual “Compile” function in the build script that imports the module.

If this function is found it is invoked before or after the pipeline stage specified, and you can do anything you want in this function. A common use would be to look for CSV files with values you can pass to existing functions. I’ve included an initial MSBuild delivery module as an example of how to do this. It looks for a file named “MSBuild.csv” and if it finds it, will build any projects using the settings in that file.
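
Here is a sketch of what a delivery module function following this convention might look like; the module name (“Acme”) and its CSV handling are invented for illustration:

    # Runs before the Compile stage of any build script that calls Import-DeliveryModule "Acme"
    function Invoke-AcmeDeliveryModulePreCompile {
        if (Test-Path ".\Acme.csv") {
            $settings = Import-Csv ".\Acme.csv"
            foreach ($row in $settings) {
                Write-Host "Preparing $($row.ProjectFile) for compilation"
            }
        }
    }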

I’m still working on this API so any feedback is appreciated.

Build Asset Cmdlets

A common operation in your build script is copying files from the current “working” directory of your build on the TFS build agent server out to the “drop location”, which is typically a UNC path. To make this easier I’ve added the Publish-Assets and Get-Assets cmdlets, which are quite simple and take a source and destination path. These cmdlets push and pull between your local working directory and the drop location without you having to specify a full path, and should simplify your script. The build summary page on TFS will display any assets you publish.
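
A hedged usage sketch based on that description (a source and a destination path); see the cmdlet reference for the exact parameters:

    Publish-Assets "MyApp\bin\Release" "binaries"   # working directory -> drop location
    Get-Assets "binaries" "MyApp\bin\Release"       # drop location -> working directory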

Coming Soon

The next two things I will be completing are the integrated help for all included cmdlets (so you can get syntax help in PowerShell instead of on the wiki), and updating the existing templates other than the Blank one to match the new syntax in the 2.0 release. I’ll then be working on getting more templates and delivery modules put together to help you hit the ground running with your delivery automation.

Powerdelivery 2.0 released on chocolatey

There’s never been a better time to get started looking at continuous delivery with PowerShell and TFS than today. Over the weekend I redesigned powerdelivery to have a cleaner syntax and allow for quicker installation using Chocolatey (a system-wide package manager for Windows based on NuGet). This topic describes the redesign as well as how to upgrade older (version 1.0) powerdelivery projects.

What’s new?

The section below describes what’s been changed in powerdelivery 2.

Cleaner build script format

Powerdelivery 1.0 required declaring script parameters, declaring an app name and version as variables, and adding functions for the stages of your deployment pipeline you want to support. Version 2.0 refines this dramatically by allowing you to invoke the Pipeline function at the top of your script to set the name and version, and then declare code blocks into which the code that executes in each stage goes.

Take a look at the default build template on GitHub for an example of the new blank script format.
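
As a rough sketch of the shape described above (the exact parameter and block names are best taken from the blank template itself, since some of these are assumptions):

    Pipeline "MyApp" -Version "1.0.0"

    Init {
        # load configuration and set script-scoped variables
    }

    Compile {
        # compile solutions and run unit tests
    }

    Deploy {
        # push assets out to the target environment's nodes
    }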

Install with chocolatey

Install Chocolatey following the instructions on its website, and then open an administrative command prompt. To install powerdelivery enter the following:

cinst powerdelivery

As new releases come out, you now only need to re-run this command to get the latest package set up and configured for use by your build server (or locally, for local builds). Note that this method of installation also has the benefit of providing IntelliSense in the PowerShell ISE or PowerGUI for all of the included powerdelivery cmdlets.

Environment targets secured by TFS

When you use the Add-Pipeline cmdlet to add powerdelivery to your TFS project, a security group is created that users must be placed in to trigger/queue non-commit builds. This allows IT to control who is allowed to promote builds to the test and production environments, and the build will fail with an appropriate error message if you try to circumvent this.

Target TFS version auto-detected

With powerdelivery 1.0 you had to tell the Add-Pipeline cmdlet whether the target TFS server was running version 2010 or 2012. Version 2 detects this automatically and will configure the server appropriately without requiring you to specify anything.

Pipeline promotion order enforced

In powerdelivery 1.0 it was possible (though frowned upon!) to promote a commit build to production, or a test build to commit. This was never intended, though the limited audience using it at the time was able to police it themselves.

Version 2 enforces the pipeline promotion order as follows:

  • Test, UAT, and CapacityTest environment builds must be promoted from a Commit build
  • Production environment builds must be promoted from a Test build

Upgrading to Version 2

Upgrading to version 2 of powerdelivery is fairly straightforward.

  1. Follow the instructions on the website for installing powerdelivery using chocolatey on your build server.
  2. Refactor your build script to follow the new format. This should be self-explanatory when looking at the blank template. A key change is to use script-scoped instead of globally scoped variables in your Init block.
  3. Locate the directory powerdelivery is installed into (for example, C:\Chocolatey\lib\powerdelivery.XXX). Find the files in the “BuildProcessTemplates” directory and upload these to your TFS project. These are the updated build process templates needed by version 2. We’re working on adding a switch to the Add-Pipeline cmdlet to allow you to do this automatically in the future.
  4. Remove the PowerShellModules subdirectory of your TFS project. This is no longer necessary as powerdelivery is now loaded as a module via chocolatey. If you had any other modules you were using other than powerdelivery in this directory, either install them as system-wide modules or add the following line to the top of your script:

    $env:PSModulePath += ";.\PowerShellModules"

    IMPORTANT: Remember to delete the “Powerdelivery” subdirectory of PowerShellModules if you choose to keep this directory.

  5. You may need to have your TFS administrator add users that were able to queue test and production builds to the security groups controlling who can build to each environment. If you don’t have appropriate permissions to build to an environment, you will get an error message during the build that should be self-explanatory in helping you figure out which group to add the user to.

powerdelivery now supports Microsoft Team Foundation Server 2012

The open source powerdelivery project I’ve been blogging about, which enables continuous delivery automation for your software projects, has now been updated to support TFS 2012.

Creating new projects using TFS 2012 powerdelivery

To add powerdelivery to a TFS 2012 project, just pass the -tfsVersion parameter to the AddPipeline utility specifying 2012. For instance:

    .\AddPipeline.ps1 (other parameters) -tfsVersion 2012

If you are using Visual Studio 2012 as well for your developers, you will probably also want to pass the -vsVersion parameter and specify 11.0 (10.0 is the default, which is Visual Studio 2010).
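
For example, targeting TFS 2012 with developers on Visual Studio 2012:

    .\AddPipeline.ps1 (other parameters) -tfsVersion 2012 -vsVersion 11.0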

Upgrading existing projects to TFS 2012 powerdelivery

If you were using TFS 2010 and want to upgrade your server to 2012, to upgrade your powerdelivery-enabled projects to use the 2012 configuration, grab the latest zip from master and add the following files from it to source control:

BuildProcessTemplates\PowerDeliveryTemplate.11.xaml
BuildProcessTemplates\PowerDeliveryChangeSetTemplate.11.xaml

Lastly, you will need to edit the Build Process Template for your “Commit” build and set it to use the PowerDeliveryTemplate.11.xaml file (instead of PowerDeliveryTemplate.xaml), and edit the rest of your builds (Test, CapacityTest, Production) to set them to use PowerDeliveryChangeSetTemplate.11.xaml.

Happy continuously delivering!
