How would you like that developed, sir? Tactically or strategically?

Here’s a number of scenarios which I’ve seen played out repeatedly in different settings.  They all have a common root – see if you can spot it:

Business user: “Ever since you developers adopted agile your releases have been buggy!  Agile is rubbish!”

Product Owner: “Great!  You’ve shown me exactly what I want!  Let’s launch tomorrow!”
Developer: “Oh no … it will take at least a month to get this ready for launch.”

Product Owner: “That POC is spot on! Let’s start developing the next feature.”
Developer: “But it’s a POC … there’s a bunch of stuff we need to do to turn it into production-ready code.”

Project Manager: “The velocity of the team is far too low.  We should cut some of the useless unit testing stuff that you guys do.”

 

So what’s the common link?  Right … quality!  Or more specifically, different views around levels of quality.

Now in agile terms, quality is best represented by the definition of done.  This definition should codify exactly what you mean when you say “this story is done”, or when you ask “how long until it is done?”.  Scrum itself doesn’t provide any specific guidance around the definition of done, but it does say that it’s important the team agrees on it.

It’s important to note that the definition of done should not be specific to a given story.  So my story about bank transfers may have a number of acceptance criteria around how to debit and credit various accounts, but even if my code works I might not be done because I haven’t had my peer review yet.
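To make that concrete, here’s a minimal sketch in Python (the Account class, its behaviour and the test names are all hypothetical – they just stand in for the story’s acceptance criteria).  Every one of these tests could pass and the story would still not be done if the team’s definition of done also demands, say, a peer review:

```python
# Hypothetical acceptance tests for the bank-transfer story.
# The Account class and its rules are made up purely for illustration.
import unittest


class Account:
    def __init__(self, balance):
        self.balance = balance

    def transfer_to(self, other, amount):
        # Debit this account and credit the other, refusing overdrafts.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        other.balance += amount


class BankTransferAcceptanceTests(unittest.TestCase):
    def test_transfer_debits_source_and_credits_target(self):
        source, target = Account(100), Account(0)
        source.transfer_to(target, 40)
        self.assertEqual(source.balance, 60)
        self.assertEqual(target.balance, 40)

    def test_transfer_fails_when_funds_are_insufficient(self):
        source, target = Account(10), Account(0)
        with self.assertRaises(ValueError):
            source.transfer_to(target, 40)


if __name__ == "__main__":
    unittest.main()
```

Passing acceptance tests tell you the story’s behaviour is right; the definition of done tells you what else has to happen before you can call it finished.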

With that all said, here is what I see going wrong in the scenarios above:

  • The team have set their definition of done below the business user’s expectations (which are probably unstated).
  • The team have set their definition of done below the product owner’s expectations – the product owner is expecting it to include all release tasks.
  • The product owner doesn’t appreciate that there is a difference between POC code and code suitable for a long-term solution.
  • The project manager either doesn’t appreciate the benefits of unit tests, or thinks that the team have set their definition of done too high.

 

There are numerous good discussions and articles on the web about the definition of done (StackOverflow question, another question, an article, and another, and a HanselMinutes podcast), but I’d like to propose that we should also agree on some overall quality levels.  For instance, it doesn’t make sense to develop a strategic, long-term solution in the same way as a prototype.  So here’s what I propose:

  • Spike – Written to prove one single technical question.  Should never be extended unless that technical question needs to be explored further.
  • Prototype – Written as a quick-and-dirty demonstration of some functionality.  Can be used for on-going demonstrations, and so may need to be extended, but should never be deployed into production.
  • Tactical – Written to fulfil a specific, limited business requirement.  Life expectancy should be in the order of 2-3 years, after which it ought to be decommissioned and replaced.
  • Strategic – Written in response to on-going and continued business requirements.  Will be expected to evolve over time to meet changing business needs and emerging technologies.

And in terms of what I think these mean for a definition of done, here is my strawman (additional steps will most likely apply for a specific project, depending on the nature of the project):

[Image: Quality levels grid]
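To help drive that discussion, here’s a minimal, hypothetical sketch of how a grid like this could be captured as data and checked per story.  The levels come from the list above; the example “done” steps (unit tests, peer review, release tasks) are just the ones mentioned in this post – your own grid will differ:

```python
# A simplified, illustrative version of the "quality levels" grid as data.
# The steps listed for each level are placeholders; fill in your own grid.
DEFINITION_OF_DONE = {
    "spike": ["technical question answered"],
    "prototype": ["functionality demonstrable to stakeholders"],
    "tactical": ["acceptance criteria met", "unit tests pass", "peer review"],
    "strategic": ["acceptance criteria met", "unit tests pass", "peer review",
                  "release tasks complete"],
}


def is_done(quality_level, completed_steps):
    """A story is done only when every step for its quality level is complete."""
    return all(step in completed_steps for step in DEFINITION_OF_DONE[quality_level])


# Example: the code works (acceptance criteria met, tests pass), but the
# peer review is still outstanding, so the story is not done yet.
print(is_done("tactical", {"acceptance criteria met", "unit tests pass"}))  # False
```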

So the next time you start a project, make up your own grid like this (or use this list, I don’t mind) and use it to have a detailed discussion with your product owner and scrum master.  It may surprise them to find that you are thinking about these issues, and their position on quality may well surprise you too!
