Most project teams have tried some permutation of an agile or SCRUM process by now, and a consistent theme amongst those I see on consulting engagements is a failure to deliver the work done in a sprint to users before starting the next one. Continuous integration, standup meetings, and backlogs are usually present, and some will even try test-driven development. But at the end of the sprint, the work is still not ready to deliver to users.
At the end of a sprint, stakeholders and the development team meet to review the work that was done. More often than not, the stakeholders like what they see to some extent, but find discrepancies between what they thought they were getting and what was actually implemented. In every case, the reason this occurs is a failure to establish acceptance criteria before doing the work.
An agile or SCRUM process is usually sold to the business as a way to improve communication with the development team and to allow priorities to shift, since the backlog can be reprioritized and work assigned to the next sprint. This offers far more flexibility than a waterfall approach with its typical several-month to year-long release cycles. Additionally, higher quality is usually promised to the business.
Inexperienced agile teams may read about extreme programming and story-driven approaches and like the fact that requirements are sold as short statements rather than the detailed use cases of a waterfall project. They often take this to the extreme: a simple description of the work is placed in the backlog, and that becomes the only agreement that can be pointed to with certainty at any point in the process.
A user story in an agile or SCRUM development team’s backlog should be a promise for a future conversation. When a story is scheduled into the next sprint and that sprint starts, the first activity should be a conversation between the business stakeholder most intimate with the story, the developer who is going to do the work, and the QA person responsible for validating acceptance of the work.
During this conversation, the goal is to establish acceptance criteria. This focus on criteria provides several benefits. First, it gives the business more time to provide details about the story that they may not have originally communicated when it was placed on the backlog. Second, it gives the developer a chance to communicate technical challenges with the business’ vision, and gives the two parties a chance to reach a compromise in design that sufficiently meets the needs of both. Third, it enables QA to think about possible ways the work may be tested, which often leads the business and development representatives to further clarity. The last benefit, which is the subject of this post, is establishing what constitutes successful completion of the work.
Doing so does not require a use case or a large document. Rather, the group should be able to walk away with a plain-English description of the ways a user or system will interact with the software that, if successful, validate that the work was done correctly. The level of detail is up to your team, but the more detailed the criteria, the more certain you can be that what is completed at the end of the sprint will be ready for delivery to users. If you are building a calculator, for example, you might establish several mathematical calculations that must succeed for the work to be considered acceptable. Not every possible calculation needs to be listed; just enough that it would be difficult to meet the acceptance criteria and still deliver a low-quality feature.
Once the acceptance criteria are established, it is of great importance that the person responsible for user acceptance, typically a QA representative, works with the developer to write automated acceptance tests (highly preferred) or to define a manual testing process that can verify them. This work can be done before the code is written (test-first acceptance) or during development (parallel acceptance), but do not leave it until the end. It is highly important that developers are able to execute the acceptance tests, automated or manual, several times during the sprint to gauge their progress toward completing the work.
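For the calculator story above, automated acceptance tests can be little more than the agreed criteria transcribed into executable checks. Here is a minimal sketch; the `Calculator` class and the specific criteria are assumptions standing in for whatever interface and behaviors your team actually agrees on:

```python
# Hypothetical acceptance tests for the calculator story.
# Each test maps back to one criterion agreed on in the
# sprint-opening conversation between business, dev, and QA.

class Calculator:
    """Minimal stub implementation so the sketch runs end to end."""

    def add(self, a, b):
        return a + b

    def divide(self, a, b):
        # Criterion from the business: dividing by zero must produce
        # a clear error rather than a crash or a nonsense result.
        if b == 0:
            raise ZeroDivisionError("division by zero is not allowed")
        return a / b


def test_addition_of_two_numbers():
    # Criterion: "Adding 2 and 3 displays 5."
    assert Calculator().add(2, 3) == 5


def test_division_by_zero_is_rejected():
    # Criterion: "Dividing by zero shows an error message."
    try:
        Calculator().divide(1, 0)
    except ZeroDivisionError:
        pass
    else:
        raise AssertionError("expected an error for division by zero")


if __name__ == "__main__":
    test_addition_of_two_numbers()
    test_division_by_zero_is_rejected()
    print("all acceptance tests passed")
```

Because the suite is executable, the developer can run it repeatedly during the sprint to gauge progress, exactly as described above.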
Because the acceptance criteria are established up front, they help developers focus on delivering precisely that functionality, and they reduce the back-and-forth that otherwise happens as a developer tries to get clarification from the business about details that were missing at the beginning. That said, if the team is inexperienced with defining acceptance, the first few sprints may produce two undesirable side effects.
The first is that, since developers are not used to delivering acceptance tests along with the work itself, there is a good chance that too much work will be scheduled into the sprint and some of it will be late. It is of great importance that the entire team (the business, QA, and development) accept this possibility and use it as a learning experience to discover what a reasonable amount of work looks like when it must be accepted and deliverable before the sprint is over. The next sprint will likely deliver less functionality in the same amount of time, but it will be done by the end of the sprint. This is the difference between a real agile process and what I’ve heard others call “agilefall” or “waterscrum”: cherry-picking practices from an agile/SCRUM process while failing to deliver on its promises.
The second side effect you may feel when first implementing this change is that there will still be gaps between what was delivered and what the business expects. Let me be clear here: it is perfectly normal, and in fact a great benefit of an agile process, that the business can see something every two weeks (or however long your sprint is) and, upon doing so, provide additional detail and changes that can be scheduled for the next sprint. However, the entire delivery team needs to get better at articulating what they plan to deliver in a way that can be acceptance tested and is clear to the developer, so that what is agreed upon is not open to interpretation.
This subtle difference is important: it is unrealistic and illogical for the business to hold developers accountable for not delivering functionality for which acceptance criteria could not be defined. If the business wants developers to do a better job of delivering what it wants, it must improve its ability to articulate those wants, or simply embrace the great flexibility that comes with an agile process and figure out more about what exactly it wants every two weeks.
One more change needs to occur in your process for the work done in the sprint, now backed by acceptance criteria, to be deliverable to users at the end. The developer should allow time to meet with operations personnel, or whoever maintains the various environments (development, acceptance, and production, for example), to ensure the work can actually be delivered to users at the conclusion of the end-of-sprint meeting. The business may still decide the functionality is not ready for users, but the goal should be for each sprint’s work to be of high enough quality to deliver to users immediately after the sprint ends, should the business desire it. Refer to my post about the dangers of making production an island and failing to optimize your build process for quick promotion from a user acceptance environment into production for more on how your operations team can deliver deployment scripts alongside the development team.
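One way to make that hand-off to operations concrete is to script the promotion path so the same ordered steps run against every environment, and deployment is rehearsed each sprint rather than improvised at release time. This is only a sketch under assumptions: the step names and environments are hypothetical stand-ins for whatever your operations team actually maintains.

```python
# Hypothetical promotion pipeline: the same steps run against
# development, acceptance, and production, so by the time a sprint
# ends, deploying to users is a repeat of an already-tested process.

def build_artifact(env):
    """Stub standing in for a real build script."""

def run_migrations(env):
    """Stub standing in for real database migration scripts."""

def smoke_test(env):
    """Stub standing in for post-deploy verification checks."""

STEPS = [build_artifact, run_migrations, smoke_test]

def deploy(environment, steps):
    """Run each deployment step against one environment, in order."""
    log = []
    for step in steps:
        log.append(f"{environment}: {step.__name__}")
        step(environment)  # a failure here halts the promotion
    return log

if __name__ == "__main__":
    for env in ("development", "acceptance", "production"):
        for line in deploy(env, STEPS):
            print(line)
```

The point of the sketch is the shape, not the stubs: one script, parameterized by environment, owned jointly by development and operations.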
Think for a moment about the net result of all of this. At the end of a sprint, QA can demonstrate that the delivered functionality meets acceptance criteria that not only they or a developer came up with, but the business as well. We’ve all seen the project where QA said a feature was tested, yet the business is upset with both them and the developers over what was delivered. Developers also no longer have to feel anxiety about whether what they deliver will be acceptable. They should, however, be comfortable with the fact that upon seeing the work, the business may want things changed or additional functionality put in. That is exactly why businesses usually agree to try an agile/SCRUM process in the first place.
Even though less functionality is delivered at the end of each sprint than your team may be used to, because time must be included for defining acceptance criteria, building automated or manual acceptance processes, and deploying the functionality into a user acceptance environment, the net result is that your business truly will be able to deliver new functionality to users every two weeks. That outcome, continuously delivering real value to your users, is more important than any of the individual agile or SCRUM practices.