Composing away from friction

 

The reason it burns

Separation of concerns is key to flexible development and to adding new features without friction. If you need to modify an existing piece of code away from its initial and current intention, it’s probably time to rethink your design. I recently came across this while putting together NBehave’s VS2010 plugin: initially the Gherkin parser’s sole responsibility was to supply classifications for syntax highlighting, but as we progressed it became evident that it would also need to support other language features such as IntelliSense and glyphs.

It looked something like this:

public class GherkinParser : IListener, IClassifier
{
    private IList<ClassificationSpan> _classifications;

    [Import]
    private IClassificationRegistry ClassificationRegistry { get; set; }
    
    public void Initialise(ITextBuffer buffer)
    {
        buffer.Changed += (sender, args) => Parse(args.After.GetText());
    }

    private void Parse(string text)
    {
        try
        {
            var languageService = new LanguageService();
            ILexer lexer = languageService.GetLexer(text, this);
            lexer.Scan(new StringReader(text));
        }
        catch (LexingException)
        {
            // Ignore mid-typing parsing errors until we provide red line support.
        }
    }

    public void Feature(Token keyword, Token name)
    {
        // Some complex processing on keyword and name omitted for clarity.
        AddClassification(keyword, name, ClassificationRegistry.Feature);
    }

    public void Scenario(Token keyword, Token name)
    {
        // Some complex processing on keyword and name omitted for clarity.
        AddClassification(keyword, name, ClassificationRegistry.Scenario);
    }

    private void AddClassification(Token keyword, Token name, IClassificationType classificationType)
    {
        // Some complex processing of text positions omitted for clarity.
        _classifications.Add(keyword.ToClassificationSpan(classificationType));
    }

    public IList<ClassificationSpan> GetClassificationSpans(SnapshotSpan span)
    {
        return _classifications;
    }
}

This code is grossly simplified, but it gets across the idea that the parser’s sole reason for being is keyword classification. To add new features which depend on parsing but aren’t related to syntax highlighting, we would need to edit the parser, and that violates the single responsibility principle. To make this code more flexible and extensible we need to do some work:

  1. Make the parser’s sole responsibility handling the buffer’s events.
  2. Format the events in a way that is easily consumable by future features.
  3. Publish when it’s parsing or idle.

So let’s tackle these one by one, and then move on to how we are going to consume this new format and build new features.

Dousing the fire

Effectively, what I see this particular class doing is consuming the lexer’s messages and republishing them in a form that is more consumable for this particular application. The Reactive Extensions were built for exactly this type of scenario, so let’s begin by consuming the buffer’s events:

IObservable<IEvent<TextContentChangedEventArgs>> fromEvent =
    Observable.FromEvent<TextContentChangedEventArgs>(
        handler => textBuffer.Changed += handler,
        handler => textBuffer.Changed -= handler);

_inputListener = fromEvent
    .Sample(TimeSpan.FromSeconds(1))
    .Select(event1 => event1.EventArgs.After)
    .Subscribe(Parse);

In a single statement we reduce the number of messages produced by the user typing quickly in Visual Studio, select the part of each message we need (the text after the user’s change), and subscribe to the new feed. Now that we are consuming the events, we need to publish them…

private Subject<ParserEvent> _parserEvents;

public IObservable<ParserEvent> ParserEvents
{
    get { return _parserEvents; }
}

 

This makes it easy for any feature that needs to consume data from the parser to pick up the events. ParserEvent is a simple DTO with the message-specific data inside (there’s a rough sketch of it after the next snippet). Pushing data to the subscribers is now as simple as:

public void Scenario(Token keyword, Token name)
{
    _parserEvents.OnNext(new ParserEvent(ParserEventType.Scenario)
    {
        Keyword = keyword.Content,
        Title = name.Content,
        Line = keyword.Position.Line,
        Snapshot = _snapshot
    });
}
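
For reference, ParserEvent itself is nothing clever. Here is a rough sketch of the DTO, inferred purely from the properties used in the snippets in this post, so the real class may well differ:

// Sketch only: shape inferred from the usages above; the real type may differ.
public enum ParserEventType
{
    Feature,
    Scenario
    // ...other Gherkin constructs.
}

public class ParserEvent
{
    public ParserEvent(ParserEventType eventType)
    {
        EventType = eventType;
    }

    public ParserEventType EventType { get; private set; }
    public string Keyword { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public int Line { get; set; }
    public ITextSnapshot Snapshot { get; set; }
}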

Great: this has nothing to do with classifications or syntax highlighting, so the parser is now fairly generic and hopefully we won’t need to make any major changes to it for a while. To satisfy the last point, letting subscribers know when we are parsing, we simply create a new subject and push to it while we are working:

public IObservable<bool> IsParsing
{
    get { return _isParsing; }
}

private void Parse(ITextSnapshot snapshot)
{
    _isParsing.OnNext(true);
    _snapshot = snapshot;

    try
    {
        var languageService = new LanguageService();
        ILexer lexer = languageService.GetLexer(snapshot.GetText(), this);
        lexer.Scan(new StringReader(snapshot.GetText()));
    }
    catch (LexingException) { }
    finally
    {
        _isParsing.OnNext(false);
    }
}
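
As a quick aside, any future feature can now react to the parser’s state without knowing anything about parsing itself. A hypothetical consumer (the parser reference and status bar field below are made up for illustration) might look like this:

// Hypothetical consumer: toggle a busy indicator while the parser works.
_parser.IsParsing
    .DistinctUntilChanged()
    .Subscribe(parsing => _statusBar.Text = parsing ? "Parsing..." : "Ready");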

 

Now that the separation is complete, we can take a look at moving the classifications to a new model.

Putting the house back together

Having one large class performing all the classifications was a little too cumbersome, and I also wanted to add classifications incrementally with minimal disruption, so I decided to use composition to facilitate the separation and aggregation of these component parts. Each part of the language has its own classifier, and we use MEF to pull in the available classifiers and delegate the processing to them.

[ImportMany]
public IEnumerable<IGherkinClassifier> Classifiers { get; set; }

 

_listeners.Add(_parser
    .ParserEvents
    .Select(parserEvent => Classifiers
        .With(list => list.FirstOrDefault(classifier => classifier.CanClassify(parserEvent)))
        .Return(gherkinClassifier => gherkinClassifier.Classify(parserEvent), _noClassificationsFound))
    .Subscribe((spans => _spans.AddRange(spans))));

 

The ImportMany attribute allows us to bring in a collection of classifiers from our assembly, or from wherever we told the container to look for possible exports. Then we subscribe to the parser’s observable stream of parser events and pass each event to the classifiers.
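
For reference, the contract the classifiers share is small. A rough sketch of what IGherkinClassifier implies, based purely on the calls above (the real interface may have extra members):

// Sketch inferred from usage; the real interface may differ.
public interface IGherkinClassifier
{
    // Can this classifier handle the given parser event?
    bool CanClassify(ParserEvent parserEvent);

    // Produce the classification spans for the event.
    IList<ClassificationSpan> Classify(ParserEvent parserEvent);
}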

Now each classifier handles a single classification, and it’s obvious to new developers what each file does.

[Export(typeof(IGherkinClassifier))]
public class FeatureClassifier : GherkinClassifierBase
{
    public override bool CanClassify(ParserEvent parserEvent)
    {
        return parserEvent.EventType == ParserEventType.Feature;
    }

    public override void RegisterClassificationDefinitions()
    {
        Register(parserEvent => GetKeywordSpan(parserEvent));
        Register(parserEvent => GetTitleSpan(parserEvent, ClassificationRegistry.FeatureTitle));
        Register(parserEvent => GetDescriptionSpan(parserEvent.Line, parserEvent.Description, parserEvent.Snapshot));   
    }
}

The solution structure also now reveals a lot about the functionality on offer and where things should sit.

[Screenshot: solution structure]

So when it comes to adding new visual studio language features, we should hopefully have a much easier and friction-free time.

Introducing the NBehave text based scenario VS2010 runner

 

I’ve been using NBehave for a long time now on various projects, usually via its fluent syntax. I wanted to move to text-based scenarios, but unfortunately, unless you like dropping to the command line to run your tests, you were out of luck.

Now, there is the wonderful-looking SpecFlow, and I would be doing the community a disservice if I didn’t mention it. It’s a very complete and increasingly popular framework for running Gherkin-compatible text-based scenarios. However, it accomplishes its task in a very different way to NBehave, and the differences are worth looking at in a separate post.

Initially John and I took a look at making a ReSharper plug-in; however, its API is geared towards running code rather than text, and getting things working was not immediately obvious. So instead we decided to try the Visual Studio API, with which we had far more success.

So let’s take a look at the new VS2010 runner and how it can make life a little easier. We will need to get the installer from the build server, as the plug-in is quite new and has not had an official release yet. I would recommend getting the latest executable artefact from the following link:

http://teamcity.codebetter.com/project.html?projectId=project30

[Screenshot: TeamCity build artefacts]

Now we can run the installer and make sure the plug-in is ticked; this will deploy the VSIX package into the appropriate location.

[Screenshot: installer with the plug-in ticked]

Assuming you have a project already using text-based scenarios, or have followed John’s blog post, you should be ready to go. I already have a solution ready, so let’s take a look:

[Screenshot: solution open in Visual Studio]

We can see the plug-in is loaded and ready to go. I already have a feature file and its associated step file set up:

[Screenshot: feature file and its step file]

If we right-click on the feature file, we will find some new context menu options:

[Screenshot: feature file context menu]

Picking Run Scenario will run the scenarios in a separate process and publish the results in the output window:

[Screenshot: scenario results in the output window]

The debug option does what you would expect, and both options currently run all the scenarios in the selected file. This is the first version and is quite light on features; however, there are some planned:

  1. Run a single scenario.
  2. Run from solution explorer.
  3. Syntax highlighting and completion for gherkin files.
  4. Full featured results window instead of output window.
  5. Go to step definition from scenario.
  6. Keyboard shortcuts.

I hope you find it useful. If you have any suggestions or bugs, you can get me on Twitter at @naeemkhedarun or on CodePlex at http://nbehave.codeplex.com

Happy coding!

NBehave: Some C# file templates

Here are a couple of templates that I use for getting started when writing a new test fixture for C#-based NBehave tests. Once you drop these into your templates folder (typically under My Documents in “Visual Studio 2010\Templates\ItemTemplates\Visual C#”), you will get the following when you add a new item:

[Screenshot: NBehave templates in the Add New Item dialog]

NBehaveMSTests.zip (1,021.00 bytes)

NBehaveNUnit.zip (992.00 bytes)

July 12 2010

NBehave, Dates and value conversions

Some of the guys at my work are starting to get into NBehave in a big way, and today they asked me an interesting question about how NBehave captures parameter values out of scenarios. Consider these methods:

[Given("a user has logged on as $username")]
public void SetupUser(string username) { }

[When("the user asks for all blog posts since $fromdate")]
public void LoadBlogPostsSince(DateTime fromdate) { }

[Then("they should see $num blog posts")]
public void CheckNumberOfBlogPosts(int num) { }

[BTW please refer to my previous post for an explanation of how methods like this can be invoked by NBehave, if you aren’t familiar with it]

Now you’ll notice that each of these methods captures a parameter value from the scenario. So if we were to invoke these methods against the following:

Given a user has logged on as Bob
When the user asks for all blog posts since 24/05/2010
Then they should see 15 blog posts

Then the captured parameter values (on my system!) would be “Bob”, “24-May-2010” and “15”. This is all quite simple, except for the fact that our method parameters are strongly typed, and hence some conversion is required. My colleague was specifically asking about the handling of dates and how NBehave parses them.

NBehave actually calls System.Convert.ChangeType to perform the conversion.  This is a pretty general function which relies heavily on the IConvertible interface to do the hard work.  Curiously, I didn’t actually know that this method existed until I looked into the NBehave code!  But anyway, this method relies on the current thread culture for date parsing.  And this means that the above scenario will not run for a thread running under en-US (since “24/05/2010” is not a valid date in that culture) but will run and yield the expected results on a thread running under en-GB.
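
If you want to see the culture dependency in isolation, here is a small standalone snippet (plain Convert.ChangeType, nothing NBehave-specific) that demonstrates the behaviour:

using System;
using System.Globalization;
using System.Threading;

class CultureDemo
{
    static void Main()
    {
        // Under en-GB, "24/05/2010" parses as 24 May 2010.
        Thread.CurrentThread.CurrentCulture = new CultureInfo("en-GB");
        var ukDate = (DateTime)Convert.ChangeType("24/05/2010", typeof(DateTime));
        Console.WriteLine(ukDate.ToString("dd-MMM-yyyy")); // 24-May-2010

        // Under en-US the same string is not a valid date (there is no month 24),
        // so the conversion throws a FormatException.
        Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");
        try
        {
            Convert.ChangeType("24/05/2010", typeof(DateTime));
        }
        catch (FormatException)
        {
            Console.WriteLine("FormatException under en-US");
        }
    }
}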

June 17 2010