How to focus on the wrong thing when writing code

A recent discussion on our mailing list revolved around the use of regions, and whether stripping them out of a codebase was a useful first task when joining a new project.  The conclusion was, sensibly, that there were almost certainly bigger fish to fry (especially since region outlining can easily be disabled in your IDE) and so regions should be left alone.  Along the way, however, someone voiced the opinion that software devs have a professional responsibility to adhere to existing standards.

Consistency in code is a great aid to readability – no doubt about it – but often someone needs to be the first to write a unit test, for example.  The benefit that comes from automated testing outweighs any benefit from uniform code by many orders of magnitude.  Similar arguments apply to the use of IOC, good design of code, proper consideration of class / method / variable names, refactoring, SOLID principles and so on.  All of these things are simply more important than whether you group all your methods into a region named “Methods” or not.  And unfortunately, none of these items can be adequately codified into standards – it’s simply not possible.

Curiously, some coding standards are so mechanical that they can actually be captured by software, and fixes can be applied automatically.  These are the only coding standards that I personally ever bother to apply on a project.  I want development tools to worry about capitalisation, the presence or absence of underscores at the start of variable names, positioning of braces, and so on.  I do not want developers to concern themselves with such issues – the time and brain power of a good developer is simply too valuable to waste on such niceties.
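By way of illustration, here is a hypothetical fragment of an .editorconfig file – one way of handing these mechanical rules over to the tooling (the exact rule names depend on your toolchain; StyleCop and ReSharper offer similar enforcement):

```ini
# Hypothetical fragment: let the tooling own the mechanical rules
root = true

[*.cs]
# Whitespace and brace placement
indent_style = space
indent_size = 4
csharp_new_line_before_open_brace = all

# Naming: private fields prefixed with an underscore
dotnet_naming_style.underscore_prefix.required_prefix = _
dotnet_naming_style.underscore_prefix.capitalization = camel_case
```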

Adherence to coding standards is just like tidying the decks of a ship.  It’s a good thing to do … just make sure that you aren’t on a Titanic that is slowly sinking into the deep!

How would you like that developed, sir? Tactically or strategically?

Here are a number of scenarios which I’ve seen played out repeatedly in different settings.  They all share a common root – see if you can spot it:

Business user: “Ever since you developers adopted agile your releases have been buggy!  Agile is rubbish!”

Product Owner: “Great!  You’ve shown me exactly what I want!  Let’s launch tomorrow!”
Developer: “Oh no … it will take at least a month to get this ready for launch.”

Product Owner: “That POC is spot on!  Let’s start developing the next feature.”
Developer: “But it’s a POC … there’s a bunch of stuff we need to do to turn it into production-ready code.”

Project Manager: “The velocity of the team is far too low.  We should cut some of the useless unit testing stuff that you guys do.”


So what’s the common link?  Right … quality!  Or more specifically, different views around levels of quality.

Now in agile terms, quality is best represented in the definition of done.  This definition should codify exactly what you mean when you say “this story is done”, or ask “how long until it is done?”.  Scrum itself doesn’t provide any specific guidance around the definition of done, but it does say that it’s important for the team to agree on one.

It’s important to note that the definition of done should not be specific to a given story.  So my story about bank transfers may have a number of acceptance criteria around how to debit and credit various accounts, but even if my code works I might not be done because I haven’t had my peer review yet.

With that all said, here is what I see is going wrong in the above scenarios:

  • The team have set their definition of done below the business user’s expectations (which are probably unstated)
  • The team have set their definition of done below the product owner’s expectations – the product owner is expecting it to include all release tasks
  • The product owner doesn’t appreciate that there is a difference between POC code and code suitable for a long-term solution
  • The project manager either doesn’t appreciate the benefits of unit tests, or thinks that the team have set their definition of done too high


There are numerous good discussions and articles on the web about a definition of done (StackOverflow question, another question, an article, and another, and a HanselMinutes podcast), but I’d like to propose the idea that we should have some overall quality levels.  For instance, it doesn’t make sense to develop a strategic, long-term solution in the same way as a prototype.  So here’s what I propose as some overall quality levels:

  • Spike – Written to prove one single technical question.  Should never be extended unless that technical question needs to be explored further.
  • Prototype – Written as a quick-and-dirty demonstration of some functionality.  Can be used for on-going demonstrations, and so may need to be extended, but should never be deployed into production.
  • Tactical – Written to fulfil a specific, limited business requirement.  Life expectancy should be in the order of 2-3 years, after which it ought to be decommissioned and replaced.
  • Strategic – Written in response to on-going and continued business requirements.  Will be expected to evolve over time to meet changing business needs and emerging technologies.

And in terms of what I think these mean for a definition of done, here is my strawman (additional steps will most likely apply for a specific project, depending on the nature of the project):

Quality levels

So the next time you start a project, make up your own grid like this (or use this list, I don’t mind) and use it to have a detailed discussion with your product owner and scrum master.  It may surprise them to find that you are thinking about these issues, and their position on quality may well surprise you too!

AutoMockingContainer: Now on Silverlight 4.0

The #Fellows AutoMockingContainer (AMC), which I recently introduced, is now available on Silverlight 4.0.  There have been no code changes, but the assemblies have been compiled against the SL 4 assemblies and Rhino Mocks 3.5 for Silverlight.  Get the binaries from the link below and the source code from the Bitbucket repo.

SharpFellows.AutoMockingContainer.Silverlight.dll (13.50 kb)

A new AutoMockingContainer (which can work with MEF)

About auto-mocking

The Problem of Dependencies

When working with an Inversion of Control (IOC) container, best practice dictates that you let the IOC do as much work for you as possible and inject your dependencies.  As a rule of thumb, this typically means that classes contain less code and conform better to the Single Responsibility Principle, but they do usually end up with a greater number of dependencies. 

In addition to this, IOC-style coding often uses constructor injection for dependencies which are critical to the functioning of the class, and property injection (aka setter injection) for other dependencies (see Martin Fowler for an explanation of these terms).
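As a sketch of the distinction (the types here are invented for illustration, not taken from the AMC): the repository below is critical, so it arrives via the constructor, while the logger is optional and arrives via a property:

```csharp
public interface IOrderRepository { }
public interface ILogger { }

public class OrderService
{
    // Constructor injection: the service cannot do its job without a
    // repository, so the dependency is demanded up front.
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders)
    {
        _orders = orders;
    }

    // Property (setter) injection: logging is useful but not essential, so
    // the container may set it after construction (or leave it null).
    public ILogger Logger { get; set; }
}
```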

The net result of all this is that classes often end up with quite a few constructor parameters.  And when you come to write some unit tests for your classes, you end up having to create and manage a large number of mock objects, even though not all of them may be relevant to your test.

The Solution: Automatically Creating Mocks

And one day some developer decided that this drudge work of creating and managing mock objects could be automated.  In fact, the guys at Eleutian were the first ones to come up with an Auto-Mocking Container (AMC) on the .Net platform … they did it by adding a facility to Castle Windsor.  Since then, AMCs have been added to StructureMap and, more recently, James Broome has produced an AMC for use with Machine.Specifications.  These are great libraries and work well.

And do we need another AMC?

With the notable exception of James’s library, you typically need to take a dependency on an IOC and spin it up in your unit tests.  This is not too much of a problem if your IOC has some support for an AMC.  On my project we are using MEF as an IOC, and (before now) there wasn’t an AMC that works with it.

The reason I didn’t use James Broome’s library is that Machine.Specifications (aka MSpec) is not really my cup of tea.  I was imprinted with a different BDD framework, NBehave.  James’s library is quite tightly coupled to the MSpec way of working.

Introducing the SharpFellows.AutoMockingContainer

It’s probably best to introduce the #Fellows AMC through some code:

public class BlogViewModel
{
    private IAuthorRepository _authors;
    private ISpamScoringService _spamScoring;

    public BlogViewModel(IAuthorRepository authors, ISpamScoringService spamScoring)
    {
        _authors = authors;
        _spamScoring = spamScoring;
    }

    public void RecordComment(string author, string comment)
    {
        // Do something interesting here
    }
}

[TestClass]
public class BlogViewModelTests
{
    [TestMethod]
    public void TestMethod1()
    {
        // ARRANGE
        var container = new ObjectFactory();
        var viewModel = container.CreateObject<BlogViewModel>();
        var authorName = "author.name";
        var comment = "great stuff";

        // ACT
        viewModel.RecordComment(authorName, comment);

        // ASSERT
        container.GetDependency<IAuthorRepository>()
                 .AssertWasCalled(repo => repo.FindByName(authorName));
        container.GetDependency<ISpamScoringService>()
                 .AssertWasCalled(scoring => scoring.MeasureSpamScore(authorName, comment));
    }
}

As you can see, ObjectFactory is where the good stuff happens.

MEF Support

The #Fellows AMC provides support for MEF, but it is not a required dependency.  The project files do have a reference to System.ComponentModel.Composition (the MEF assembly) but unless you actually invoke the MEF Dependency Locator through policy, it will never be required at run-time.

And here is a MEF-based view-model with a test using the #Fellows AMC:

public class MefBlogViewModel
{
    private IAuthorRepository _authors;

    [ImportingConstructor]
    public MefBlogViewModel(IAuthorRepository authors)
    {
        _authors = authors;
    }

    [Import]
    public ISpamScoringService SpamScoring { get; set; }

    public void RecordComment(string author, string comment)
    {
        // Do something interesting here
    }
}

[TestClass]
public class MefBlogViewModelTests
{
    [TestMethod]
    public void TestMethod1()
    {
        // ARRANGE
        var container = new ObjectFactory();
        container.Policy.Set(new MefDependencyLocator());
        var viewModel = container.CreateObject<MefBlogViewModel>();
        var authorName = "author.name";
        var comment = "great stuff";

        // ACT
        viewModel.RecordComment(authorName, comment);

        // ASSERT
        container.GetDependency<IAuthorRepository>()
                 .AssertWasCalled(repo => repo.FindByName(authorName));
        container.GetDependency<ISpamScoringService>()
                 .AssertWasCalled(scoring => scoring.MeasureSpamScore(authorName, comment));
    }
}

The #Fellows AMC will inject against an [ImportingConstructor] (or will invoke a default parameter-less constructor) and will then inject against properties attributed with [Import].  Constructor parameters and properties attributed with [ImportMany] are not currently supported for auto-mock injection.
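The injection order just described can be sketched with a little reflection.  This is illustrative only, not the AMC’s actual source, and it defines local stand-ins for MEF’s attributes so that the sketch compiles without the System.ComponentModel.Composition assembly:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Stand-ins for MEF's attributes; real code would use the ones in
// System.ComponentModel.Composition.
[AttributeUsage(AttributeTargets.Constructor)]
public class ImportingConstructorAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Property)]
public class ImportAttribute : Attribute { }

public static class MefStyleBuilder
{
    // Prefer an [ImportingConstructor], otherwise fall back to the
    // parameter-less constructor, then fill writable [Import] properties.
    // createDependency stands in for "hand me a mock of this type".
    public static T Build<T>(Func<Type, object> createDependency)
    {
        var ctor = typeof(T).GetConstructors()
                            .FirstOrDefault(c => c.IsDefined(typeof(ImportingConstructorAttribute), false))
                   ?? typeof(T).GetConstructor(Type.EmptyTypes);

        var args = ctor.GetParameters()
                       .Select(p => createDependency(p.ParameterType))
                       .ToArray();
        var instance = (T) ctor.Invoke(args);

        foreach (var prop in typeof(T).GetProperties()
                                      .Where(p => p.IsDefined(typeof(ImportAttribute), false) && p.CanWrite))
        {
            prop.SetValue(instance, createDependency(prop.PropertyType), null);
        }
        return instance;
    }
}
```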

Now it takes quite a sharp eye to spot the difference in the test.  It is a single line of code that sets up MEF as part of the policy:

container.Policy.Set(new MefDependencyLocator());

What is Policy and how is it applied?

AutoMocking Policy is really what gives the #Fellows AMC its flexibility and is what sets it apart from the other AutoMocking Containers which are out there.  Put simply, policy consists of three areas:

AMC.Policy

Each of these areas can be configured on an individual container (as per the above), or at a static level, where it will affect all containers in the AppDomain that are subsequently created:

ObjectFactory.DefaultPolicy.Set(new MefDependencyLocator());

Policy components that are currently available in #Fellows AMC are as follows:

Dependency Locators
  • Reflection-based (using the greediest constructor)     [default]
  • MEF-based
Lifecycle Controllers
  • Shared dependencies [default]
  • Non-shared dependencies
Mock Generators
  • Rhino Mocks
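The default “greediest constructor” rule can be sketched as follows (illustrative only – the real locator also works in concert with the rest of the policy):

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class GreedyConstructorLocator
{
    // Pick the public constructor with the most parameters; an auto-mocking
    // container would then supply a mock for each parameter type.
    public static ConstructorInfo FindGreediest(Type type)
    {
        return type.GetConstructors()
                   .OrderByDescending(ctor => ctor.GetParameters().Length)
                   .First();
    }
}
```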


The three areas of policy work together to give the #Fellows AMC its overall behaviour.

Policy is an area that I think can be developed a lot further, both in terms of adding new policy components as well as allowing policy to be specified at a more granular level.  For instance:

  • specifying a mock generator that will instantiate real objects for classes in a given namespace
  • specifying that some classes should not have auto-mocking applied at all
  • specifying that certain dependencies should be shared whilst others should not

Furthermore, I suspect it would be extremely useful to allow policy to be read from the config file.  This would allow us to specify policy centrally for all our tests without needing code to adjust static state on the ObjectFactory.

Roadmap for the #Fellows AutoMockingContainer

The first thing I want to do is add Silverlight support.  Or more specifically, an assembly compiled for Silverlight against the relevant Silverlight assemblies (e.g. the Rhino Mocks SL version), since that is all that’s needed.  After that, I’m hoping to extend the policy mechanism significantly.

Where can I get the #Fellows AutoMockingContainer?

Assemblies for .Net 4.0 are attached to this blog post.  If you are interested in the source code, then you can download it from the BitBucket repository.

SharpFellows.AutoMockingContainer.dll (13.00 kb)
