Over the last year and a half of working with 1 week iterations, two staples of the traditional way of running an Agile project have turned out to be unnecessary for us. The first is the burndown chart, and the second is the Definition of Done. I used to be a firm advocate of both these tools, but 1 week iterations have changed my mind about them. Now I prefer to express quality as explicit tasks.
With 1 week iterations, we have a cycle time of 3 – 4 days.* This isn’t great when you consider that there are only 5 working days in a 1 week iteration. We haven’t been able to figure out how to get this cycle time down, even though our mean and mode story size is 2. I have a feeling that this may simply be the most efficient cycle time for the team in our organisation once we take into account our “organisational lag” (i.e. dependencies on outside teams, previous architectural decisions on legacy systems we have to interface with, and plain old lack of knowledge about legacy systems because those people have left). What we do know is that we have to complete the stories within one week. About two or three days into the iteration, the team has an excellent feel for how they’re doing and what will and won’t be completed. As a result, we don’t use the burndown chart.
Another interesting aspect is that we don’t need the Definition of Done. This is a direct consequence of not accepting a story as business-complete until the service is in production. To ensure we don’t deploy something that is going to fall over, we have a story template that we apply to all new features, bug fixes and modifications so that we cover all the bases. The template story has the following tasks (remember, we practice BDD):
- Write failing BDD test
- Code review
- Fixes from code review
- Add blackbox test OR verify current blackbox tests still work
- Quality Engineer – verify
- Merge and mint
- Deploy and verify in staging
- Deploy and verify in production
- Enable statistics collection
- Create statistics graphs
- Update Helpdesk Monitoring Handbook
We take the above template tasks and modify them as appropriate. For example, there may be no statistics to collect or graph because the story is a modification to a feature whose statistics are already being collected. In that case, the statistics tasks are removed.
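To make the “template, then prune” idea concrete, here is a minimal sketch in Python. The function name and the `skip` parameter are my invention for illustration, not our actual tooling; the task names come straight from the template above.

```python
# Hypothetical sketch of the "template, then prune" approach to story tasks.
STORY_TEMPLATE = [
    "Write failing BDD test",
    "Code review",
    "Fixes from code review",
    "Add blackbox test OR verify current blackbox tests still work",
    "Quality Engineer - verify",
    "Merge and mint",
    "Deploy and verify in staging",
    "Deploy and verify in production",
    "Enable statistics collection",
    "Create statistics graphs",
    "Update Helpdesk Monitoring Handbook",
]

def tasks_for_story(skip=()):
    """Copy the template, dropping any tasks that don't apply to this story."""
    return [task for task in STORY_TEMPLATE if task not in skip]

# A modification to a feature whose statistics are already collected:
tasks = tasks_for_story(skip={"Enable statistics collection",
                              "Create statistics graphs"})
```

The point is that every story starts from the full quality checklist, and tasks are removed deliberately rather than added as an afterthought.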
An item in the template which bears mentioning is the Code Review task. Since the inception of our project, we have used the team email to do code reviews, for a couple of reasons. The main one is full transparency to all team members, so we can all learn from each other. We tried various code review tools but decided not to use them for two reasons:
- It was difficult to see the context of the solution: either the developer or the tool setup meant that only the changed classes were included in the review, so we ended up going back to our IDEs anyway.
- Only the developer submitting the code could see the result of the review.
When performing the code review, we have another template:
- Design – focus on the design of the solution
- Application code
- Unit test code
- BDD tests
- Sonar metrics
The above techniques give us a structured way to ensure that code released to production does not get rolled back. Indeed, in the last year, with an average of 2 – 3 deployments per day (manual, for now – we’re working on automating this), we have had only a single-digit number of rollbacks. Pretty good, I think!
It would be great to know what Agile tools or techniques you have either modified or removed due to project and/or application context, so do feel free to leave a comment below.
* Cycle time is the amount of time that passes between a story being put in progress and it being business accepted.
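In code terms, that definition amounts to a simple timestamp difference. Here is a minimal sketch (the function name and the example timestamps are invented for illustration):

```python
from datetime import datetime

def cycle_time_days(in_progress, business_accepted):
    """Elapsed calendar days between a story being put in progress
    and it being business accepted."""
    return (business_accepted - in_progress).total_seconds() / 86400

started  = datetime(2012, 3, 5, 9, 0)    # Monday morning: story put in progress
accepted = datetime(2012, 3, 8, 17, 0)   # Thursday evening: business accepted
print(round(cycle_time_days(started, accepted), 1))  # prints 3.3
```

Note this measures elapsed calendar time, not effort; a story started on Monday and accepted on Thursday counts as roughly 3 days regardless of how much work it took.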