While a team must adopt a customer-guided, acceptance criteria-driven culture to begin releasing IT assets frequently and with high quality, eventually an investment needs to be made in a Deployment Pipeline. Put simply, this is technology that enables an organization to use automation to eliminate the manual processes involved in releasing its IT assets. The industry is full of automation tools, and many organizations will need to combine several of them to achieve one-click release of assets into an environment, enabling Continuous Delivery.

What follows, in no particular order, is a list of considerations I encourage you to take into account when selecting technologies for building your pipeline. These considerations are rooted in guidance outlined in the Continuous Delivery book.

#1 – Does it work with your source control systems?

When engineers check in IT assets to a source control system or some other asset repository, the delivery pipeline should automatically trigger some actions. These actions might be things like compiling code, running unit tests, configuring servers, and deploying assets. If you have one team that uses GitHub and another that uses Team Foundation Server, and both teams’ deliverables are combined to produce the final product, you don’t want to get into a situation where someone needs to manually push and pull code between the two repositories.

If one deliverable is a dependency of the other, create a pipeline where the production environment of the first team’s deliverable is located someplace the dependent team can consume it. This might be a UNC share; a RubyGems, NuGet, or Chocolatey repository (public or private); or a host of other options. This allows the first team to make changes in their own development environment, do their own acceptance testing, and then choose when they have a build solid enough to push to the other team. The dependent team can then move their own assets, along with the first team’s, through their own pipeline and eventually into the production environment where users will consume them. This technique is sometimes known as a cascading deployment pipeline.
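
To make the publishing end of this concrete, here is a minimal PowerShell sketch of the upstream team’s final pipeline step, assuming they package their deliverable with nuget.exe and push it to a private NuGet feed. The feed URL, package name, and API key variable are all illustrative.

```powershell
# Hypothetical post-acceptance publish step for the upstream team.
# The package name, feed URL, and API key variable are illustrative.
$version = "1.4.2"                                # version the upstream team chose to promote
$feedUrl = "https://nuget.example.com/api/v2"     # private feed the dependent team consumes

# Package the deliverable from its .nuspec manifest.
& nuget.exe pack "CoreLibrary.nuspec" -Version $version

# Push to the private feed; the dependent team restores this package by
# version, so they decide when to pick up the new build.
& nuget.exe push "CoreLibrary.$version.nupkg" -Source $feedUrl -ApiKey $env:NUGET_API_KEY
if ($LASTEXITCODE -ne 0) { throw "Publishing CoreLibrary $version failed." }
```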

#2 – Does it move assets between environments without recompilation?

One important principle of Continuous Delivery is to never deploy different binaries to different environments. If the assets you deploy to production are compiled or generated differently from the copy verified in the UAT environment, you open your team up to an unnecessary risk of defects. Your deployment pipeline should be able to deploy the same assets to multiple environments, with the only difference being their configuration.
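
As a minimal sketch of this principle, assuming a hypothetical file share layout: the same pre-compiled build output is copied to every environment, and only an environment-specific configuration file is overlaid afterward.

```powershell
# Deploy the SAME compiled assets to any environment; only configuration
# differs. All paths and environment names here are illustrative.
param(
    [Parameter(Mandatory = $true)]
    [ValidateSet("Development", "UAT", "Production")]
    [string]$Environment,

    [Parameter(Mandatory = $true)]
    [string]$BuildDropPath    # e.g. a build drop like \\buildserver\drops\MyApp\1.4.2
)

$targetPath = "\\$Environment-web01\wwwroot\MyApp"

# Copy the already-compiled assets unchanged; no recompilation per environment.
Copy-Item -Path "$BuildDropPath\*" -Destination $targetPath -Recurse -Force

# Overlay only the environment-specific configuration.
Copy-Item -Path "config\$Environment\Web.config" -Destination $targetPath -Force
```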

#3 – Is it approachable by operations personnel?

To achieve the organizational alignment of bringing development and operations personnel together through DevOps, operations personnel will assist in the creation of, and ongoing change to, the delivery pipeline to support the value needed by the IT offering. If your operations personnel are not object-oriented programmers well versed in polymorphism, domain-driven design, functional programming, and other product development skills, you will have a challenging time getting them to adopt and contribute to the pipeline if those are the skills needed to work with it.

I recommend considering automation technologies that use scripting languages such as Windows PowerShell, bash, or Ruby. These languages have a lower barrier to entry, can execute without needing to be compiled first, and don’t require the purchase of an expensive Integrated Development Environment (IDE) to modify. Powerful tools built on these languages that are worth investigating include Puppet, Chef, Capistrano, Rake, Powerdelivery, and PSake. I can’t honestly recommend MSBuild, Maven, or Ant for the pipeline (though they are fine for compiling things) because XML tends to be too fragile and difficult to comprehend for building even the simplest conditional logic.
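
To illustrate that lower barrier to entry, here is a representative PSake build script; the task names, solution path, and test runner location are illustrative, but the structure shows how much more readable conditional build logic is in PowerShell than in XML.

```powershell
# A representative PSake script; paths and task names are illustrative.
properties {
    $configuration = "Release"
    $solution      = "MyApp.sln"
}

task default -depends Test

task Compile {
    # exec fails the build if the command returns a non-zero exit code.
    exec { msbuild $solution /p:Configuration=$configuration /v:minimal }
}

task Test -depends Compile {
    exec { & ".\tools\nunit\nunit-console.exe" ".\tests\bin\$configuration\MyApp.Tests.dll" }
}
```

An operations engineer can read and change this in any text editor and run it with Invoke-psake from a PowerShell prompt, with no IDE or compile step required.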

#4 – Are deployment assets versioned along with deliverables?

Because the deployment pipeline will itself be undergoing change, it contains configuration and logic meant to deploy the assets of a release at a given moment in time. To enable rollback (I’ll talk more about this later) and reduce surprises, it is preferable to version the pipeline’s actions along with your deliverables. This is fairly easy to do with most of the technologies I cited above, but some other tools on the market don’t lend themselves well to this important tenet.
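
One simple way to honor this tenet, assuming your deployment scripts are checked in alongside the code they deploy and travel in the build drop: always run the copy of the script that shipped with the build being deployed, rather than a “latest” copy kept somewhere else.

```powershell
# Illustrative: the deploy script ships inside the build drop itself, so
# deploying (or rolling back to) build 1.3.9 runs the 1.3.9-era logic.
$buildNumber = "1.3.9"                               # build being deployed or rolled back to
$drop        = "\\buildserver\drops\MyApp\$buildNumber"

# Execute the deployment logic that was versioned WITH that build.
& "$drop\deploy\Deploy-MyApp.ps1" -Environment "UAT" -DropPath $drop
```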

#5 – Is it possible to write rollback procedures should aspects of the deployment fail?

Automating the “backing out” of changes made during a deployment, also known as a rollback, is an important step in gaining the confidence necessary to release to customers more frequently. This is the process of returning a deployment to its prior state if a failure occurs somewhere along the way. If your rollback process is manual, you introduce the chance of errors, and troubleshooting an improperly performed rollback while the production system is down can cause immense stress.

Look for technologies that keep track of actions performed and, if there is a failure, either provide “hooks” to let you write your own rollback logic or perform the rollback actions automatically (in reverse order of deployment). There’s a good chance you will need to create some of the rollback functionality yourself; a sketch of the pattern follows below.
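
Here is a minimal sketch of that “record actions, undo in reverse” pattern in PowerShell; the Backup-Item, Restore-Item, and Invoke-DatabaseMigration helpers are hypothetical stand-ins for your own deployment steps.

```powershell
# Record an undo action for each deployment step; on failure, replay the
# undo actions in reverse order. The helper functions are hypothetical.
$rollbackActions = New-Object System.Collections.Stack

try {
    Backup-Item "D:\Sites\MyApp" "D:\Backups\MyApp"              # hypothetical helper
    $rollbackActions.Push({ Restore-Item "D:\Backups\MyApp" "D:\Sites\MyApp" })

    Invoke-DatabaseMigration -Version 42                         # hypothetical helper
    $rollbackActions.Push({ Invoke-DatabaseMigration -Version 41 })
}
catch {
    Write-Warning "Deployment failed: $_. Rolling back..."
    while ($rollbackActions.Count -gt 0) {
        $undo = $rollbackActions.Pop()
        & $undo                                                  # undo in reverse order
    }
    throw
}
```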

#6 – Does it prevent skipping of environments when deploying release candidates?

The reason teams create dedicated development, staging (or user acceptance), and production environments is to isolate changes being made from customers and from staff performing acceptance testing. One of the highest deployment costs, and one that kills a team’s ability to release more frequently, is manual configuration and deployment of assets to these environments. This is because it is simply too easy for a person to forget to make a change consistently in all environments.

Look for a technology that keeps track of which releases have been deployed to which environments and will prevent, for example, a build that has not been tested from being pushed into production. This is one of the reasons I created Powerdelivery: there isn’t a tool in the Microsoft ecosystem for putting quality gates into automated deployments between environments, and the Windows Workflow Foundation technology upon which automated builds in TFS are built is too unwieldy and slow to keep up with the needs of a team that releases frequently.
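
The kind of gate to look for resembles this sketch, assuming your tool records promotions somewhere queryable; Get-PromotionRecord is a hypothetical query against the pipeline’s deployment history.

```powershell
# Refuse to deploy a build to production unless it passed through UAT.
# Get-PromotionRecord is a hypothetical query of the pipeline's history.
param([string]$BuildNumber, [string]$TargetEnvironment)

if ($TargetEnvironment -eq "Production") {
    $uatRecord = Get-PromotionRecord -Build $BuildNumber -Environment "UAT"
    if (-not $uatRecord -or -not $uatRecord.AcceptanceTestsPassed) {
        throw "Build $BuildNumber has not passed UAT; refusing to deploy to Production."
    }
}
```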

#7 – Is it possible to use security to control who can initiate deployments into each environment?

Most teams put a policy in place that determines who is allowed to deploy releases into each environment. This is still a good practice, and should be agreed upon and communicated when planning to begin continuously delivering any IT offering. To help meet Sarbanes-Oxley (SOX) regulatory compliance and prevent mistakes, however, policy alone is often not enough. Look for technologies that let you configure who can promote a build from development to test (often a QA resource) and from there into production (often an operations resource).
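
On Windows, one way such a check can work is against Active Directory group membership, as in this sketch; the domain and group names are illustrative.

```powershell
# Gate promotion on membership in an environment-specific AD group.
# The domain and group names are illustrative; map them to your own policy.
param([Parameter(Mandatory = $true)][string]$TargetEnvironment)

$requiredGroup = @{
    "UAT"        = "DOMAIN\QA-Promoters"
    "Production" = "DOMAIN\Ops-Promoters"
}[$TargetEnvironment]

$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)

if (-not $principal.IsInRole($requiredGroup)) {
    throw "$($identity.Name) is not authorized to deploy to $TargetEnvironment."
}
```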

#8 – Does the technology provide access to compilers, deployers, and test frameworks you use?

You may have read some great story about how a darling start-up in Silicon Valley implemented Continuous Delivery using JavaScript, but if you can’t use that technology to compile, deploy, and test YOUR code, you will end up fighting it. Look for technologies that include out-of-the-box support for the tools and frameworks you use, or that have a low barrier to integrating them.

#9 – Does the technology provide access to infrastructure automation for the on-premises or cloud platforms you use?

Continuous Delivery is more than automated deployment: you also need to be able to spin up servers or virtual machines on premises and in the cloud, depending on your requirements. If you are using Amazon Web Services, Windows Azure, Heroku, or any of the other popular hosting platforms, make sure your pipeline technology can utilize the best APIs from these vendors. If your technology allows you to deploy a website or mobile application but not harness the rich features of your hosting platform, for example, you might be missing out on the ability to automate your entire infrastructure and benefit from things like automatic scaling of your application based on demand.
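
For example, with the Windows Azure PowerShell module a pipeline can stand up a virtual machine as just another scripted step. This is a sketch only: the service name, image, and credentials are placeholders, and the exact cmdlet parameters vary by module version.

```powershell
# Provision a VM from the pipeline using the Windows Azure PowerShell
# module's classic cmdlets. All names below are illustrative placeholders.
Import-Module Azure

$vmParams = @{
    Windows       = $true
    ServiceName   = "myapp-uat"
    Name          = "uat-web01"
    ImageName     = "YOUR-IMAGE-NAME"      # choose one from Get-AzureVMImage
    AdminUsername = "deployadmin"
    Password      = $env:VM_ADMIN_PASSWORD
    Location      = "West US"
}
New-AzureQuickVM @vmParams
```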

#10 – Is it possible to replay activities consistently across multiple nodes?

As more IT offerings scale out their processing and storage across multiple physical or virtual computing nodes, deployment technologies that enable Continuous Delivery need first-class support for parallel activities. If your ASP.NET MVC or Ruby on Rails website will be hosted on a cluster of nodes, look for a technology that can accept the DNS names or IP addresses of those nodes and perform deployments seamlessly across them, rolling back the nodes already deployed to if one fails. If your projects are automatically compiled every time a developer checks in code, but the deployment process requires repeating manual steps across multiple nodes, you are in for a long road of troubleshooting configuration mismatches, and you will have a long way to go before you can reduce your release cycle time enough to be more agile in your market.
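
A sketch of what that fan-out looks like with PowerShell remoting; the node names and deploy/rollback script paths are illustrative, and the rollback pattern from consideration #5 is applied across the whole cluster.

```powershell
# Deploy to a cluster of nodes in parallel via PowerShell remoting, rolling
# back every node if any single node fails. All names are illustrative.
$nodes = "web01", "web02", "web03"

try {
    # Invoke-Command fans the script block out to all nodes concurrently.
    Invoke-Command -ComputerName $nodes -ErrorAction Stop -ScriptBlock {
        & "C:\Deploy\Deploy-MyApp.ps1" -DropPath "\\buildserver\drops\MyApp\1.4.2"
    }
}
catch {
    Write-Warning "Deployment failed on at least one node; rolling back all nodes."
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        & "C:\Deploy\Rollback-MyApp.ps1"
    }
    throw
}
```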

Hopefully this list of considerations helps your team think objectively about some of the hype out there around deployment pipeline technologies, and better take into account the additional development and maintenance costs necessary to get them to work for your particular IT offerings.

Have you read the Continuous Delivery book or are you starting your journey towards reduced cycle times? Let me know some of the struggles and successes you’ve had around deployment pipeline selection and implementation below!
