Debugging design time data with MefedMVVM


Sometimes design-time data just won’t show, even though everything works at runtime. This is quite normal: neither Expression Blend nor the Visual Studio designer instantiates our views and ViewModels in the context of a running application.

MefedMVVM supports design-time data through the IDesignTimeAware interface, so we can provide an implementation of our ViewModel purely for use by the designers.
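As a refresher, a design-time-aware ViewModel looks roughly like this (a minimal sketch; the export name and properties are illustrative, not taken from my actual plugin):

// Sketch only: the ViewModel name, export name and data are made up.
[ExportViewModel("MyViewModel")]
public class MyViewModel : IDesignTimeAware
{
    public ObservableCollection<string> Items { get; private set; }

    // Called by MefedMVVM only when the ViewModel is composed by a
    // designer (Blend or the Visual Studio designer), never at runtime.
    public void DesignTimeInitialization()
    {
        Items = new ObservableCollection<string> { "Sample item one", "Sample item two" };
    }
}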

In my current case, despite all my bindings being correct and things working at runtime, Blend refuses to show me the design-time ListBoxItems…


So, like all code failures, we debug: simply set a breakpoint in the DesignTimeInitialization method and attach the debugger to Expression Blend.


You will need to close and re-open the XAML view (or do a build) so that Blend re-initialises our ViewModel… and voilà! We now know what the problem is.


In my case it was simply a service which was missing at design time. Allowing a default value for the import lets the design-time composition succeed, since we don’t actually need the service until runtime.
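In MEF terms that means marking the import as optional, along these lines (a sketch; the service name is hypothetical):

// AllowDefault = true lets composition succeed when no matching export is
// found; the property is simply left null at design time.
[Import(AllowDefault = true)]
public IRunnerService RunnerService { get; set; }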


Much better!

Visual Studio Extensibility and MefedMVVM


Initially the project started out as a quick hack to enable a single piece of functionality. However, as the functionality grows it’s time for the codebase to grow up too, and that includes the UI elements. I’ve decided to use MefedMVVM for its design-time Blend features and easy-to-grok codebase.

If you try to use it out of the box with your MEF component and debug, you might get the following error:

System.ComponentModel.Composition.ImportCardinalityMismatchException occurred
  Message=No valid exports were found that match the constraint '(((exportDefinition.ContractName == "RunOrDebugViewModel") etc…

Doh! Taking a look at the default runtime IComposer (which tells the container where to look for your assemblies), we can understand a little more about why it can’t find our assemblies.

private AggregateCatalog GetCatalog()
{
    var catalog = new AggregateCatalog();
    var baseDirectory = AppDomain.CurrentDomain.BaseDirectory;
    var extensionPath = String.Format(@"{0}\Extensions\", baseDirectory);
    catalog.Catalogs.Add(new DirectoryCatalog(baseDirectory));
    catalog.Catalogs.Add(new DirectoryCatalog(baseDirectory, "*.exe"));
    if (Directory.Exists(extensionPath))
        catalog.Catalogs.Add(new DirectoryCatalog(extensionPath));
    return catalog;
}

It bases the location on the current AppDomain, which makes perfect sense in a normal application, and in fact should work with a deployed VSIX.

However, when debugging a VSIX it resolves to:

"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\"

Odd; my assemblies aren’t deployed there at all. Debugging to find out where they actually are, I can see:

“C:/Users/naeem.khedarun/AppData/Local/Microsoft/VisualStudio/10.0Exp/Extensions/Naeem Khedarun/NBehave/0.5.0.0/NBehave.VS2010.Plugin.Editor.dll”

I see: it’s actually deployed into the experimental instance of Visual Studio. Luckily for us, MefedMVVM has a mechanism for overriding the default discovery logic.

So the first thing to do is configure the MefedMVVM bootstrapper in whichever Visual Studio provider initialises first:

LocatorBootstrapper.ApplyComposer(new VisualStudioRuntimeComposer());
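In this plugin’s case that means making the call before MefedMVVM resolves any ViewModels; one option is a static constructor on an early-initialised MEF component (a sketch only; the provider, content type and classifier names are illustrative, not the plugin’s actual code):

[Export(typeof(IClassifierProvider))]
[ContentType("gherkin")] // hypothetical content type
public class GherkinClassifierProvider : IClassifierProvider
{
    // Runs once, before any ViewModel in this assembly is resolved.
    static GherkinClassifierProvider()
    {
        LocatorBootstrapper.ApplyComposer(new VisualStudioRuntimeComposer());
    }

    public IClassifier GetClassifier(ITextBuffer buffer)
    {
        return new GherkinClassifier(); // hypothetical classifier
    }
}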


This will override the default runtime composer with our own, so let’s implement the IComposer interface and tell it to look in the appropriate place…

public class VisualStudioRuntimeComposer : IComposer
{
    public ComposablePartCatalog InitializeContainer()
    {
        return GetCatalog();
    }

    public IEnumerable<ExportProvider> GetCustomExportProviders()
    {
        return null; // no custom export providers needed
    }

    private AggregateCatalog GetCatalog()
    {
        // Find our own assembly among those already loaded into the
        // AppDomain, and use its location rather than the base directory.
        var location = (from assembly in AppDomain.CurrentDomain.GetAssemblies()
                        where assembly == typeof(ServiceRegistrar).Assembly
                        select assembly.Location).First();

        var directory = Path.GetDirectoryName(location);

        var catalog = new AggregateCatalog();
        catalog.Catalogs.Add(new DirectoryCatalog(directory));

        return catalog;
    }
}


We look in the current AppDomain as before, but this time we look for our loaded assembly (one must be loaded, or the bootstrapper wouldn’t have been called) and grab its directory.

We can now resolve ViewModels in the experimental instance! I haven’t yet tried the new composer with the normal instance; however, if there are any issues I’ll make a follow-up post.

Composing away from friction


The reason it burns

Separation of concerns is key to flexible development and to adding new features without friction. If you need to modify an existing piece of code away from its initial and current intention, it’s probably time to rethink your design. I recently came across this when putting together NBehave’s VS2010 plugin: initially the Gherkin parser’s sole responsibility was to supply classifications for syntax highlighting. However, as we progressed it became evident it was going to need to handle other language features such as IntelliSense and glyphs.

It looked something like this:

public class GherkinParser : IListener, IClassifier
{
    private readonly IList<ClassificationSpan> _classifications = new List<ClassificationSpan>();

    [Import]
    private IClassificationRegistry ClassificationRegistry { get; set; }

    public void Initialise(ITextBuffer buffer)
    {
        buffer.Changed += (sender, args) => Parse(args.After.GetText());
    }

    private void Parse(string text)
    {
        try
        {
            var languageService = new LanguageService();
            ILexer lexer = languageService.GetLexer(text, this);
            lexer.Scan(new StringReader(text));
        }
        catch (LexingException)
        {
            /* Ignore mid-typing parsing errors until we provide red line support */
        }
    }

    public void Feature(Token keyword, Token name)
    {
        // Some complex processing on keyword and name omitted for clarity.
        AddClassification(keyword, name, ClassificationRegistry.Feature);
    }

    public void Scenario(Token keyword, Token name)
    {
        // Some complex processing on keyword and name omitted for clarity.
        AddClassification(keyword, name, ClassificationRegistry.Scenario);
    }

    private void AddClassification(Token keyword, Token name, IClassificationType classificationType)
    {
        // Some complex processing of text positions omitted for clarity.
        _classifications.Add(keyword.ToClassificationSpan(classificationType));
    }

    public IList<ClassificationSpan> GetClassificationSpans(SnapshotSpan span)
    {
        return _classifications;
    }
}

This code is grossly simplified, but it gets across the idea that the parser’s sole reason for being is keyword classification. To add new features which depend on parsing but aren’t related to syntax highlighting, we would need to edit the parser, and that violates SRP (the single responsibility principle). To make this code more flexible and extensible we need to do some work:

  1. Make the parser’s sole responsibility handling the buffer’s events.
  2. Format the events in a way that’s easily consumable by future features.
  3. Publish when it’s parsing or idle.

So let’s tackle these one by one and then move onto how we are going to consume this new format and make new features.

Dousing the fire

Effectively, what I see this particular class doing is consuming the lexer’s messages and republishing them in a more consumable way for this particular application. The Reactive Extensions were built for exactly this type of scenario, so let’s begin by consuming the buffer’s events:

IObservable<IEvent<TextContentChangedEventArgs>> fromEvent =
    Observable.FromEvent<TextContentChangedEventArgs>(
        handler => textBuffer.Changed += handler,
        handler => textBuffer.Changed -= handler);

_inputListener = fromEvent
    .Sample(TimeSpan.FromSeconds(1))
    .Select(event1 => event1.EventArgs.After)
    .Subscribe(Parse);

In a few lines, we sample the stream of change events (at most one per second, however fast the user types in Visual Studio), select the part of each event we need (the text after the user’s change), and subscribe to the resulting feed. Now that we are consuming the events, we need to publish them…

private readonly Subject<ParserEvent> _parserEvents = new Subject<ParserEvent>();

public IObservable<ParserEvent> ParserEvents
{
    get { return _parserEvents; }
}


This makes it easy for any features that need to consume data from the parser to pick up the events. ParserEvent is a simple DTO carrying the message-specific data. Pushing data to the subscribers is now as simple as:

public void Scenario(Token keyword, Token name)
{
    _parserEvents.OnNext(new ParserEvent(ParserEventType.Scenario)
    {
        Keyword = keyword.Content,
        Title = name.Content,
        Line = keyword.Position.Line,
        Snapshot = _snapshot
    });
}
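For reference, ParserEvent might look something like this; a sketch inferred from the fields used in these snippets, not the actual class:

public enum ParserEventType { Feature, Scenario /*, ... */ }

// Inferred shape: only the members used in this post are shown.
public class ParserEvent
{
    public ParserEvent(ParserEventType eventType)
    {
        EventType = eventType;
    }

    public ParserEventType EventType { get; private set; }
    public string Keyword { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public int Line { get; set; }
    public ITextSnapshot Snapshot { get; set; }
}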

Great: this has nothing to do with classifications or syntax highlighting; the parser is now fairly generic, and hopefully we won’t need to make any major changes to it for a while. To satisfy the last point, letting subscribers know when we are parsing, we simply create another subject and push to it while we are working:

public IObservable<bool> IsParsing
{
    get { return _isParsing; }
}

private void Parse(ITextSnapshot snapshot)
{
    _isParsing.OnNext(true);
    _snapshot = snapshot;

    try
    {
        var languageService = new LanguageService();
        ILexer lexer = languageService.GetLexer(snapshot.GetText(), this);
        lexer.Scan(new StringReader(snapshot.GetText()));
    }
    catch (LexingException) { }
    finally
    {
        _isParsing.OnNext(false);
    }
}
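Subscribers can then defer work while a parse is in flight; for example (hypothetical consumer code, not from the plugin):

// Hypothetical: only raise classification-changed notifications once parsing settles.
_listeners.Add(_parser.IsParsing
    .Where(parsing => !parsing)
    .Subscribe(_ => RaiseClassificationChanged())); // RaiseClassificationChanged is illustrative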


Now that the separation is complete, we can take a look at moving the classifications to a new model.

Putting the house back together

Having one large class performing all the classifications was a little too cumbersome, and I also wanted to add classifications incrementally with minimal disruption, so I decided to use composition to facilitate the separation and aggregation of these component parts. Each part of the language has its own classifier, and we use MEF to pull in the available classifiers and delegate the processing to them.

[ImportMany]
public IEnumerable<IGherkinClassifier> Classifiers { get; set; }


_listeners.Add(_parser
    .ParserEvents
    // With/Return are maybe-monad helpers: find the first classifier that can
    // handle the event, ask it to classify, or fall back to no classifications.
    .Select(parserEvent => Classifiers
        .With(list => list.FirstOrDefault(classifier => classifier.CanClassify(parserEvent)))
        .Return(gherkinClassifier => gherkinClassifier.Classify(parserEvent), _noClassificationsFound))
    .Subscribe(spans => _spans.AddRange(spans)));


The ImportMany attribute allows us to bring in a collection of classifiers from our assembly, or from wherever we told the container to look for possible exports. Then we subscribe to the parser’s observable stream of parser events and pass each event to the classifiers.

Now each classifier handles a single classification, and it’s obvious to new developers what each file does.

[Export(typeof(IGherkinClassifier))]
public class FeatureClassifier : GherkinClassifierBase
{
    public override bool CanClassify(ParserEvent parserEvent)
    {
        return parserEvent.EventType == ParserEventType.Feature;
    }

    public override void RegisterClassificationDefinitions()
    {
        Register(parserEvent => GetKeywordSpan(parserEvent));
        Register(parserEvent => GetTitleSpan(parserEvent, ClassificationRegistry.FeatureTitle));
        Register(parserEvent => GetDescriptionSpan(parserEvent.Line, parserEvent.Description, parserEvent.Snapshot));
    }
}

The solution structure also now reveals a lot about the functionality on offer and where things should sit.


So when it comes to adding new Visual Studio language features, we should hopefully have a much easier and friction-free time.
