I’m taking a break from my posts on continuous delivery to talk about a related trend that continues unabated. Ask yourself, have you ever heard any of these phrases uttered in your delivery process (or perhaps said them yourself)?

  • “That algorithm is simple and doesn’t require testing.”
  • “Only a few of our customers use that alternative configuration; it should work.”
  • “I’ve used this pattern hundreds of times. It will work fine for this application.”
  • “There’s enough information in the requirements for any developer to implement this. Don’t schedule any time to do more analysis.”
  • “This component is too simple to need a code review.”
  • “We only found a few bugs in regression last time. We can’t afford a full test cycle. Get it out there.”
  • “They are late on their deliverable, but it will only take a couple hours to integrate once we get it.”
  • “If that happens it’s a catastrophic event. There’s no way to test for it, just disclaim our liability in the license agreement.”
  • “Production isn’t very different from staging. We’ll just fix it in production if we find something that breaks.”
  • “Our customers aren’t going to use that case much anyway. Just focus on the happy path.”
  • “Our customers use all of these features. If we don’t include them all in the new version, our customers won’t use the product.”
  • “Can you believe this single-request performance? We will scale to thousands of requests!”

These statements are sadly made all the time. They are a symptom of the pressure teams feel when the software delivery process is thrown together rather than gated by automated metrics or process steps that can’t be skipped. It’s easy for someone to declare “all code will be 80% tested before we ship,” but if you don’t have an automated process for enforcing that, it can be circumvented as release time nears.
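To make that concrete, here’s what one of those un-skippable gates might look like as a build step rather than a verbal agreement. This is only a minimal sketch using Python’s coverage.py, and it assumes the tests have already been run with coverage enabled; the 80% threshold is just the example number from above.

```python
# gate_coverage.py - a sketch of an automated coverage gate (assumes "coverage run -m pytest"
# or similar has already produced a .coverage data file).
import sys

import coverage

THRESHOLD = 80.0  # the "all code will be 80% tested" promise, made executable

cov = coverage.Coverage()
cov.load()            # read the recorded coverage data
total = cov.report()  # print the report; returns the total percent covered

if total < THRESHOLD:
    print(f"Coverage {total:.1f}% is below the agreed {THRESHOLD:.0f}% gate.")
    sys.exit(1)       # non-zero exit fails the pipeline, so the gate can't be quietly skipped
```

Wire a step like this into the pipeline and the 80% declaration stops being something that can be waved away as release time nears.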

There’s no way to realistically automate every decision (and I dread the day when someone tries to sell me on a technology for that one), but when we predict the future, part of taking that risk is measuring it and looking for trends. This is business intelligence at its core, applied to your development process.

To validate predictions, it’s necessary to capture metadata for every task assigned to members of your team (that includes you, management) as well as to keep a log of predictions and outcomes. This log should record what prediction was made, by whom, and what process or design decisions were made on the basis of it. It should also describe a metric that can be used to judge, in some measurable fashion, whether the predicted outcome actually occurred.
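There’s nothing fancy required here; even a flat file works. As a rough sketch (the field names below are mine, not any standard), each entry just needs to tie together the prediction, the decision it drove, and the metric that will validate it:

```python
# A minimal sketch of a prediction log entry; the fields are illustrative, not a standard.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    made_by: str                   # who made the prediction (management included)
    made_on: date                  # when it was made
    prediction: str                # what was predicted
    decision: str                  # the process or design decision made on the basis of it
    metric: str                    # how the outcome will be measured
    review_on: date                # when the outcome will be checked
    outcome: Optional[str] = None  # filled in later: did the prediction hold?

prediction_log = [
    Prediction(
        made_by="dev lead",
        made_on=date(2013, 4, 1),
        prediction="This component is too simple to need a code review.",
        decision="Skipped the code review for the import component.",
        metric="Hours spent fixing defects traced to that component over the next two releases.",
        review_on=date(2013, 10, 1),
    )
]
```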

As an example, if a prediction is made that one feature is more important to users than another, measure the usage in your software and review the results at regular intervals over the lifetime of the product. This is harder to do with shrink-wrapped products, but it can be done; Microsoft and Google will often let you “opt in” to providing additional feedback about their products.
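For a hosted product this can be as simple as counting events your application already logs. A rough sketch, assuming one log line per feature invocation (the file name and line format here are made up for illustration):

```python
# A sketch of validating "feature A matters more to users than feature B" against real usage.
from collections import Counter

usage = Counter()
with open("feature_usage.log") as events:
    for line in events:
        # illustrative line format: "2013-04-01T10:15:00 user=42 feature=bulk_export"
        for token in line.split():
            if token.startswith("feature="):
                usage[token.removeprefix("feature=")] += 1

print(usage.most_common(10))  # compare against the prediction at each regular review interval
```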

If a prediction is made that a certain feature is stable enough not to warrant exhaustive testing, add that prediction to the log and measure the time spent fixing bugs related to it. You may be surprised at the cumulative waste caused by a bad design or process decision whose impact arrives as many small individual costs instead of one big, noticeable bang. The old saying “death by a thousand cuts” applies here.
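Your defect tracker almost certainly has the raw material for this already. A sketch, assuming a CSV export of closed defects with illustrative column names:

```python
# A sketch of tallying the cumulative cost of a "too stable to bother testing" prediction.
import csv
from collections import defaultdict

hours_by_component = defaultdict(float)
with open("closed_defects.csv", newline="") as export:
    for row in csv.DictReader(export):
        hours_by_component[row["component"]] += float(row["hours_to_fix"])

# The many small individual costs add up to the "death by a thousand cuts" total.
for component, hours in sorted(hours_by_component.items(), key=lambda kv: -kv[1]):
    print(f"{component}: {hours:.1f} hours spent fixing bugs")
```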

The goal of capturing these predictions is twofold. First, it’s important that if a prediction turns out to be false, we know about it. When predictions result in actions and the results are never validated, we lose valuable insight into our ability to consistently make good decisions, and we may keep making the same bad ones. Agile teams regularly review team members’ ability to hit estimates to help them estimate better. Shouldn’t the same be true for predictions that affect our delivery process?

The second goal of capturing these predictions is to open our eyes to data that can yield insight into the delivery process, data we may not even know is there. Once you start tracking the outcomes of your predictions, you start looking at information that may never have been investigated before, and that gives you more tools for making future predictions. There’s a ton of data being captured in your requirements tracking, defect tracking, source control, and operations monitoring tools, but its value has not been fully leveraged. If you’ve got the data, it just makes sense to use it.

Category: continuous delivery, process improvement