How would you like that developed, sir? Tactically or strategically?

Here are a number of scenarios which I’ve seen played out repeatedly in different settings.  They all have a common root – see if you can spot it:

Business user: “Ever since you developers adopted agile your releases have been buggy!  Agile is rubbish!”

Product Owner: “Great!  You’ve shown me exactly what I want!  Let’s launch tomorrow!”
Developer: “Oh no … it will take at least a month to get this ready for launch.”

Product Owner: “That POC is spot on! Let’s start developing the next feature.”
Developer: “But it’s a POC … there’s a bunch of stuff we need to do to turn it into production-ready code.”

Project Manager: “The velocity of the team is far too low.  We should cut some of the useless unit testing stuff that you guys do.”

So what’s the common link?  Right … quality!  Or more specifically, different views around levels of quality.

Now in agile terms, quality is best represented in the definition of done.  This definition should codify exactly what you mean when you say “this story is done”, or “how long until it is done?”.  Scrum itself doesn’t provide any specific guidance around the definition of done, but it does say that it’s important the team agrees on one.

It’s important to note that the definition of done should not be specific to a given story.  So my story about bank transfers may have a number of acceptance criteria around how to debit and credit various accounts, but even if my code works I might not be done because I haven’t had my peer review yet.
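To make that distinction concrete, here is a minimal sketch of what one of those acceptance criteria might look like as an automated test; the account model, function and amounts are hypothetical, invented purely for illustration.  Even when a test like this passes, the story isn’t done until the rest of the definition of done (the peer review, for instance) is satisfied.

    # Hypothetical acceptance test for the bank-transfer story (illustrative only).
    # A passing test satisfies one acceptance criterion; it does not, by itself,
    # make the story "done".

    def transfer(accounts, source, target, amount):
        """Debit `source` and credit `target` by `amount`."""
        if accounts[source] < amount:
            raise ValueError("insufficient funds")
        accounts[source] -= amount
        accounts[target] += amount

    def test_transfer_debits_source_and_credits_target():
        accounts = {"savings": 100, "current": 50}
        transfer(accounts, source="savings", target="current", amount=30)
        assert accounts["savings"] == 70
        assert accounts["current"] == 80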

With all that said, here is what I see going wrong in the above scenarios:

  • The team have set their definition of done below the business user’s expectations (which are probably unstated)
  • The team have set their definition of done below the product owner’s expectations - the product owner is expecting it to include all release tasks
  • The product owner doesn’t appreciate that there is a difference between POC code and code suitable for a long-term solution
  • The project manager either doesn’t appreciate the benefits of unit tests, or thinks that the team have set their definition of done too high

There are numerous good discussions and articles on the web about a definition of done (StackOverflow question, another question, an article, and another, and a HanselMinutes podcast), but I’d like to propose the idea that we should have some overall quality levels.  For instance, it doesn’t make sense to develop a strategic, long-term solution in the same way as a prototype.  So here’s what I propose as some overall quality levels:

  • Spike – Written to prove one single technical question.  Should never be extended unless that technical question needs to be explored further.
  • Prototype – Written as a quick-and-dirty demonstration of some functionality.  Can be used for on-going demonstrations, and so may need to be extended, but should never be deployed into production.
  • Tactical – Written to fulfil a specific, limited business requirement.  Life expectancy should be in the order of 2-3 years, after which it ought to be decommissioned and replaced.
  • Strategic – Written in response to on-going and continued business requirements.  Will be expected to evolve over time to meet changing business needs and emerging technologies.

And in terms of what I think these mean for a definition of done, here is my strawman (additional steps will most likely apply for a specific project, depending on the nature of the project):

[Quality levels grid]
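If it helps to make such a grid concrete, here is a minimal sketch of the idea in code; the quality levels are the ones listed above, but the “done” tasks against each level are illustrative assumptions only, not a definitive checklist - agree your own with the team.

    # Illustrative mapping of quality level to definition-of-done tasks.
    # The tasks listed here are examples only, not a definitive list.
    DEFINITION_OF_DONE = {
        "spike":     ["technical question answered", "findings written up"],
        "prototype": ["demonstrable end to end", "known shortcuts documented"],
        "tactical":  ["unit tests pass", "peer review done", "deployment notes written"],
        "strategic": ["unit tests pass", "peer review done", "automated build and deploy",
                      "operational monitoring in place", "documentation updated"],
    }

    def remaining_tasks(quality_level, completed):
        """Return the 'done' tasks still outstanding for a story."""
        return [task for task in DEFINITION_OF_DONE[quality_level] if task not in completed]

    # Example: remaining_tasks("tactical", completed={"unit tests pass"})
    # -> ["peer review done", "deployment notes written"]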

So the next time you start a project, make up your own grid like this (or use this list, I don’t mind) and use it to have a detailed discussion with your product owner and scrum master.  It may surprise them to find that you are thinking about these issues, and their position on quality may well surprise you too!

July 12 2011

Lost in the Delta Quadrant!

 

"A word to the wise is infuriating." 

Hunter S Thompson 

OK, for the three of you that read this: I have been absent for a while due to never-ending project pressures and futile attempts to buy a house (All things I had put off till after Agile 2009).

This post is (unfortunately) a bit on the short side, and really just an outline of some of the things I have in mind to write about in the coming months, as much to jog my memory as to, hopefully, keep you all interested:

Agile and UAT - Where this can come unstuck, how we can prevent it from doing just that

Behavior Driven Development and how we started implementing it (Given my project breaks, And I don't want it to, When I have better tests and they run, Then I am a happy bunny) - there's a small sketch of the idea below

Stubborn QA Departments - What do you do when you have a QA department that refuses to give you testers until the release phase of your project?

Where is the risk? - Why is it that when we contract agile we seem to expect our clients to take all the risk?
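As a small taster for the BDD post, here is a minimal sketch of that Given/When/Then shape expressed as a plain Python test; the scenario and names are made up purely for illustration, and a real implementation would more likely use a BDD tool such as behave or pytest-bdd.

    # Hypothetical Given/When/Then scenario written as a plain test (illustrative only).
    #   Given my project breaks,
    #   When I have better tests and they run,
    #   Then I am a happy bunny.

    def run_build(tests):
        """Pretend build step: passes only if every supplied check passes."""
        return all(check() for check in tests)

    def test_better_tests_make_me_a_happy_bunny():
        # Given: a project that breaks
        tests = [lambda: False]
        assert run_build(tests) is False

        # When: I have better tests and they run
        tests = [lambda: True, lambda: True]
        happy_bunny = run_build(tests)

        # Then: I am a happy bunny
        assert happy_bunny is True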

A bit of a brain dump on what I'm thinking at the moment. Hopefully it's enough to whet your appetite. Among those topics I will probably be discussing some of the training material we generated while researching our talk for Agile 2009. Maybe I'll drop a bit of insight into what we are proposing for Agile 2010?

Or maybe I won't let that cat out of the bag - You'll have to wait and see.

Mal

 

by Malcolm
March 13 2010

Agile 2009 - We entered the water and got out alive

 

"It was the Law of the Sea, they said. Civilization ends at the waterline. Beyond that, we all enter the food chain, and not always right at the top."

Hunter S Thompson - The Great Shark Hunt 

Agile 2009 - Simon Bennett and I presented this talk and, I have to say, it went down quite well. It could have gone horribly wrong, after all.

When we proposed this session we had quite a few comments (All visible in the above link I think) that "the hate" we were talking about didn’t exist. So when we presented and asked the room to vote - green if you didn’t feel the hate, red if you did ....

 We got a LOT of red.

Which is just as well! If it had been all green we would have been letting people out early :-)

So - we see the hate, other people see it, so why is there so little information on it? The point of our talk was to investigate this hate, see if there were clear reasons for it, and see what the overall drivers were. This I think we achieved, and I'll post some of the feedback here soon. Suffice to say, what we got was quite positive.

We did get a few people turn up for a different session however - I believe they were looking for practical tools to make their testers (Or themselves) more agile. They wanted more direction, which we didn’t give them, but then that wasn’t the point of the presentation. And as Alistair Cockburn would say (Did say!), "That is a Shu question with a Ri answer".

I think it is probably a good topic for a whole other workshop (That I might consider for next year but hey, you will have to wait till next year to see if I do it)

The main thing I think we got from it is that the environment your team and company creates for your testers has as much impact on their ability to be agile as they themselves do, if not more. Treating people in a certain way will, in my experience, cause them to react in that manner.

Take teenagers: where a shop decrees that all teenagers are shoplifters and a security guard must follow them around, the teenagers will (In my opinion justifiably) think "Well - if you are going to treat me like a criminal anyway, regardless of whether I steal or not, then I may as well steal - you will treat me the same anyway."

If you don’t treat them all as criminals you will still get the odd thief (But you would have got that anyway); now, though, you won’t be actively encouraging the rest of them to behave that way.

Your testers are no different! If you treat them as the last line of defence (Identified by the sentence "Why didn’t testing find this bug?") they will take on a gatekeeper mentality.

If you treat them as the sole people responsible for quality (Identified by the sentence "I’m finished, it just needs to be tested now") then they will sit at the end and wait for you to throw the code over to them.

I could go on, but basically I am saying that until the environment permits your testers to be agile they simply can’t be, and many places won’t change the environment until the tester is agile.

So which really does come first? The chicken or the egg?

At the risk of sounding all Kanban and eastern mysticism ;-) you need to change both together; if that proves impossible, change the environment first! You may have some short-term testing difficulties while your testers attempt to maintain their old methods in an environment that is hostile to those behaviours, but then again, you can’t drive a car on water, and it’s quite difficult to be non-agile in an environment that is hostile to those behaviours.

I'm not saying this will work for everyone but I suspect that, as in nature, if the environment is hostile to behaviours you want to minimise then those behaviours should die out by a process of natural selection.

 

by Malcolm
March 13 2010