Integration Testing WCF Services with TransactionScope

Integration testing WCF services can be a right pain, and I experienced it firsthand on my current project. Imagine a hypothetical person repository service exposing four CRUD operations; it would make sense to test all four of them. If the operations were tested in random order, it is perfectly feasible that testing an update after a delete would fail because the object to be updated had already been deleted by a previous test. The same obviously applies to reads following updates and so on. In other words, the tests depend on each other, and this dependency is evil for a number of reasons: first of all, it usually means that you cannot run tests in isolation, as they may depend on modifications made by other tests. Secondly, one failing test may cause a number of failures in the tests which follow. Thirdly, by the end of the test run the underlying database is in a right mess as it contains modified data, forcing you to redeploy it should you wish to re-run the tests. Considering that running integration tests is usually a time-consuming exercise, this vicious circle of deploy/run/fix becomes extremely expensive as the project goes on.

TransactionScope to the rescue

Fortunately for us, WCF supports distributed transactions, and if there is one place where they make perfect sense it is integration testing. Imagine a test class written along the following lines:

[Screenshot: a test base class which opens a TransactionScope before each test and disposes it afterwards]
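The original post shows this as a screenshot; a minimal sketch of such a base class, assuming MSTest (the class and method names below are my own, not the attached sample), might look like this:

using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    [TestInitialize]
    public void BeginTransaction()
    {
        // Every test runs inside its own ambient transaction.
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void RollbackTransaction()
    {
        // Disposing the scope without calling Complete() rolls everything back,
        // leaving the database untouched regardless of the test outcome.
        _scope.Dispose();
    }
}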

The idea behind it is that whenever a test starts, a new transaction is initiated. When the test completes, regardless of its outcome, the changes are rolled back, leaving the underlying database in pristine condition. This means that we can break the dependency between tests, run them in any order and rerun the whole lot without the need to redeploy the database. The holy grail of integration testing :) To make it work, however, the service needs to support distributed transactions (which is usually not a bad idea anyhow). Having said that, you have to be aware of various and potentially serious gotchas which I will cover later.

To make a service "transaction aware" the following changes have to be made (I assume a default, out of the box WCF project): first of all, the service has to expose an endpoint which uses a binding that supports distributed transactions (e.g. WSHttpBinding), and the binding has to be configured to allow transaction flow. This configuration has to be applied on both the client (unit test project) and the server side:

[Screenshot: wsHttpBinding configuration with transaction flow enabled, applied on both client and server]
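As a rough sketch (the service and contract names are made up for illustration), the server-side config might contain something along these lines, with the client config in the test project pointing at the same kind of binding configuration:

<system.serviceModel>
  <bindings>
    <wsHttpBinding>
      <binding name="TransactionalBinding" transactionFlow="true" />
    </wsHttpBinding>
  </bindings>
  <services>
    <service name="PersonRepository.PersonService">
      <endpoint address="" binding="wsHttpBinding"
                bindingConfiguration="TransactionalBinding"
                contract="PersonRepository.IPersonService" />
    </service>
  </services>
</system.serviceModel>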

Secondly, all operations which are supposed to participate in a transaction have to be marked as such in the service contract:

[Screenshot: service contract operations decorated with the TransactionFlow attribute]
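A sketch of what such a contract might look like (the Person type and the operation names are my own illustration, not the attached sample):

using System.ServiceModel;

[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    int CreatePerson(Person person);

    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    Person GetPerson(int id);

    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void UpdatePerson(Person person);

    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void DeletePerson(int id);
}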

The TransactionFlowOption enumeration includes the NotAllowed, Allowed and Mandatory values, which I hope are self-explanatory. Using the Allowed option is usually the safest bet, as the operation will accept calls made both with and without a transaction scope. Making the service transaction aware as illustrated above is usually enough to make the whole idea work.

The third change, which is optional but which I highly recommend, is to decorate all methods which accept inbound transactions with [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]. By doing so we state that regardless of whether the client "flows" a transaction or not, the method will execute within a transaction scope. If the scope is not provided by the client, WCF will simply create one for us, which means that the code remains identical whether or not a client-side transaction is provided. The TransactionAutoComplete option means that unless the method throws an exception, the transaction will commit. This also means that we no longer have to worry about making calls to BeginTransaction/Commit/Rollback. The default for TransactionAutoComplete is true, so strictly speaking it is not necessary to set it, but I did it here for illustration purposes.

[Screenshot: service operation decorated with the OperationBehavior attribute]
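In the hypothetical person service used above, the delete operation could then be written along these lines (data access elided; the remaining operations follow the same pattern):

using System.ServiceModel;

public class PersonService : IPersonService
{
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void DeletePerson(int id)
    {
        // Data access code elided. It executes within the ambient transaction:
        // either the one flowed from the client or one created by WCF for this call.
        // With TransactionAutoComplete = true the transaction completes unless
        // the method throws.
    }

    // Remaining IPersonService members omitted for brevity.
}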

The attached sample solution contains a working example of a person repository and may be useful to get you started.

The small print

An important feature of WCF is the default isolation level for distributed transactions, which is Serializable. This means that, more often than not, your service is likely to suffer badly from scalability problems should the isolation level remain at the default value. Luckily for us, WCF allows us to adjust it; the service implementation simply has to specify the required level using the ServiceBehavior attribute. Unless you know exactly what you are doing, I would strongly recommend setting the isolation level to ReadCommitted. This is the default isolation level in most SQL Server installations and it also gives you some interesting options (such as row versioning, which I come back to below).

[Screenshot: service implementation decorated with the ServiceBehavior attribute specifying the isolation level]
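Continuing with the hypothetical person service, the attribute looks like this:

using System.ServiceModel;
using System.Transactions;

[ServiceBehavior(TransactionIsolationLevel = IsolationLevel.ReadCommitted)]
public class PersonService : IPersonService
{
    // Operations as before.
}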

Having done this, the caller has to explicitly specify the required isolation level as well when constructing the transaction scope.

[Screenshot: client-side TransactionScope constructed with explicit TransactionOptions]
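In the test base class sketched earlier this means constructing the scope with explicit TransactionOptions, for example:

using System.Transactions;

var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Calls to the transaction-aware service go here; disposing the scope
    // without calling Complete() rolls the work back.
}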

An interesting "feature" of using a transaction scope, in testing in particular, is the fact that your test may deadlock on itself if not all operations executed within the transaction scope actually participate in it. The main reason this may happen is a missing TransactionFlow attribute on the operation in the service contract. In the test below, if the GetPerson operation did not support transactions while DeletePerson did, an attempt to read the value deleted by another transaction would cause a deadlock. Feel free to modify the code and try it for yourself.

[Screenshot: test which deadlocks when GetPerson does not flow the transaction]
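The test in the screenshot is roughly along these lines (the client field and operation names are illustrative):

[TestMethod]
public void Deleting_then_reading_the_same_person_deadlocks_without_transaction_flow()
{
    // Both calls run inside the ambient TransactionScope created by the test base class.
    // If DeletePerson flows the transaction but GetPerson does not, the read executes
    // in a separate transaction and blocks on the locks held by the uncommitted delete,
    // which itself cannot complete until the test finishes: a self-inflicted deadlock.
    _client.DeletePerson(42);
    Assert.IsNull(_client.GetPerson(42));
}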

Distributed transactions require MSDTC to be running on all machines participating in the transaction, i.e. the client, the WCF server and the database server. This is usually the first stumbling block, as MSDTC may be disabled or may be configured in a way which prevents it from accepting distributed transactions. To configure MSDTC, use the "Administrative Tools\Component Services" applet from the Control Panel. The MSDTC configuration is hidden in the context menu of "My Computer\Properties". Once you activate this option, navigate to the MSDTC tab and make sure that the security settings allow network DTC access as well as inbound and outbound transactions.

[Screenshot: MSDTC security configuration dialog in Component Services]

Performance

One issue which people usually raise with regard to distributed transactions is performance: these concerns are absolutely valid and have to be given some serious consideration. The first problem is the fact that if the service has to involve a transaction manager (MSDTC) in order to get the job done, it usually means some overhead. Luckily, a transaction initiated in a TransactionScope does not always need to use MSDTC. Microsoft provides the Lightweight Transaction Manager, which will be used by default as long as the transaction meets some specific criteria: transactions involving just one database will remain local, incurring almost no overhead (~1% according to my load tests). As soon as your transaction involves other resources (databases or WCF services) it will be promoted to a distributed one and will take a performance hit (a 25% decrease in performance in my test case, but your mileage may vary). To check whether a method executes within a local or a distributed transaction you can inspect Transaction.Current.TransactionInformation.DistributedIdentifier: a value equal to Guid.Empty means that the transaction is local.

The second issue affecting performance is the fact that transactions will usually take longer to commit/roll back, meaning that database locks will be held for longer. In the case of WCF services the commit happens only after the results have been serialized back to the client, which can introduce serious scalability issues due to locking. This problem can usually be alleviated by using the ReadCommitted isolation level together with row versioning in the database.
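A small helper, sketched below, makes that check easy to log from a service operation or assert in a load test:

using System;
using System.Transactions;

public static class TransactionDiagnostics
{
    // Returns true if the ambient transaction has been promoted to a
    // distributed (MSDTC) transaction.
    public static bool IsDistributed()
    {
        Transaction current = Transaction.Current;
        return current != null &&
               current.TransactionInformation.DistributedIdentifier != Guid.Empty;
    }
}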

Parting shots

The project I am currently working on contains some 2500 integration tests, 600 of which test our WCF repository. In order to make sure that every test obeys the same rules with regard to transactions, we have a unit test in place which inspects all test classes in the project and makes sure that all of them derive from the common base class responsible for setting up and rolling back the transaction. I would strongly recommend following this approach in any non-trivial project, as otherwise you may end up with some misbehaving tests breaking the whole concept.
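A sketch of such a guard test, assuming MSTest and the TransactionalTestBase name used earlier (the real implementation in our project differs):

using System.Linq;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestConventions
{
    [TestMethod]
    public void Every_test_class_derives_from_the_transactional_base()
    {
        // Find every [TestClass] in this assembly which does not inherit from the
        // base class that owns the TransactionScope, excluding this guard test itself.
        var offenders = Assembly.GetExecutingAssembly()
            .GetTypes()
            .Where(t => t.IsDefined(typeof(TestClassAttribute), false))
            .Where(t => t != typeof(TestConventions))
            .Where(t => !typeof(TransactionalTestBase).IsAssignableFrom(t))
            .Select(t => t.Name)
            .ToArray();

        Assert.AreEqual(0, offenders.Length,
            "Test classes not deriving from TransactionalTestBase: " + string.Join(", ", offenders));
    }
}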

Happy testing!

November 16 2008

Using WCF in the MessengerService

Recently I posted an article and some code for a service which logs onto MSN Messenger and exposes some web services for sending messages.  I promised in that article that if I got time to refactor the code to use WCF I would post it - here it is, and the code is quite a lot simpler.  The revised code is attached to this blog post; you can click here to download the code and binaries.

Removing the Web Services Code

There was previously some complexity in hosting the web services and ensuring that they were functioning correctly.  This complexity manifested itself in the form of a background Thread to service requests, a ManualResetEvent for synchronisation, two AppDomains at runtime and a few helper classes.  Since we are no longer using web services, all of this can be deleted.

Creating a Service Contract

Often some thought is needed around the exact operations to be exposed and their various signatures.  However, since I wasn't aiming to change any functionality, I didn't see any point in revisiting the operation signatures we had previously.  So all that was involved in creating the service contract was setting up an IMessengerService interface and decorating it with the appropriate attributes:

[ServiceContract]
public interface IMessengerService
{
    [OperationContract]
    bool QueueMessage(string[] recipients, string message);

    [OperationContract]
    bool QueueMessageToOnePerson(string recipient, string message);

    [OperationContract]
    bool QueueMessageToOnePersonWithValidity(string recipient, string message, int validityInMinutes);

    [OperationContract]
    bool QueueMessageWithValidity(string[] recipients, string message, int validityInMinutes);
}

Hosting the WCF Service

The old web service methods were thin wrappers around calls to QueueManager methods.  These wrapper methods now live in the QueueManager class which implements IMessengerService.  So all that's needed to expose this singleton instance over WCF is the following code:

_host = new ServiceHost(QueueManager.Instance);
_host.Open();

and an attribute on the QueueManager class as follows:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class QueueManager : IMessengerService
{
    // Details of the class omitted here
}

Reworking the MSBuild Client

Strangely enough, while the move to WCF was a simplification of the service code there is now more code in the MSBuild client.  This is due to one particular quirk of code running as an MSBuild task - there is no config file.  WCF is (rightly so) tailored to expressing service endpoints in config files.

As a result of this, the MsnNotification task has to programmatically construct the full endpoint (including binding, service behaviours, etc) based on parameters passed in from the build script.  To avoid excessive complexity, the code deduces the transport protocol from the service URL and then uses the default binding.

Binding binding = null;
EndpointAddress address = new EndpointAddress(_url);
switch (address.Uri.Scheme)
{
    case "http":
    case "https":
        binding = new BasicHttpBinding();
        break;
    case "net.tcp":
        binding = new NetTcpBinding();
        break;
    case "net.msmq":
        binding = new NetMsmqBinding();
        break;
    case "net.pipe":
        binding = new NetNamedPipeBinding();
        break;
    case "net.p2p":
        binding = new NetPeerTcpBinding();
        break;
    default:
        Log.LogError("Unable to deduce correct binding from URL. Supported schemes are http, https, net.tcp, net.msmq, net.pipe and net.p2p");
        return false;
}

ChannelFactory<MessengerSvc.IMessengerServiceChannel> factory =
    new ChannelFactory<MessengerSvc.IMessengerServiceChannel>(binding, address);

MessengerSvc.IMessengerServiceChannel channel = factory.CreateChannel();
channel.Open();

// Use the service proxy as before
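Sending a notification is then a single call on the channel (the recipient and message below are obviously made up); the generated channel interface combines the contract with IClientChannel, so it can be closed directly:

channel.QueueMessageToOnePerson("someone@example.com", "Build completed");
channel.Close();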

Benefits of Using WCF

So, aside from the change of technology, have we gained anything from using WCF instead of the web services?  Sure we have.  Here's the list as I see it:

  • Reduced code complexity - a lot of the complex stuff around the hosting of the web services is now taken care of by the System.ServiceModel.ServiceHost class.
  • Better scalability - the sharp-eyed reader will notice that previously we only had a single thread to process incoming requests.  The ServiceHost class does a better job of servicing clients.
  • A wider range of protocols - we are no longer limited to plain SOAP messages.  We can declaratively add in things like security, transactions, reliable messaging, etc, etc (although the MSBuild task doesn't support these right now).  Additionally, we can expose the service over MSMQ, TCP sockets, etc, etc.

UPDATE

Howard took one look at my code and immediately started refactoring it.  The updated version is attached to this post.  The main change he made was to improve access to the configuration file - creating a class that derives from ConfigurationSection is much more of a .NET 2.0 way of doing things.
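For anyone who hasn't used that approach, a custom section class looks roughly like this (a sketch only; the section and property names in Howard's version will differ):

using System.Configuration;

public class MessengerConfigurationSection : ConfigurationSection
{
    [ConfigurationProperty("username", IsRequired = true)]
    public string Username
    {
        get { return (string)this["username"]; }
        set { this["username"] = value; }
    }

    [ConfigurationProperty("password", IsRequired = true)]
    public string Password
    {
        get { return (string)this["password"]; }
        set { this["password"] = value; }
    }
}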

November 7 2006

WCF: Sending Collections Over the Wire

Recently I needed to send a custom collection class as a response to a WCF call.  I duly added the normal [DataContract] attribute and was rewarded with the following WCF error:

Type 'MyCollection' is an invalid collection type since it has DataContractAttribute attribute.

Hang on, you may think, why do we need custom collection classes given all the great generic collections in .NET 2.0, e.g. List<T> and so on?  You have a point, but for reasons of their own some of my team members decided to create a class such as the following:

public class MyCollection : List<MyObject>

It turns out that [DataContract] is not the correct attribute to be using in this case.  WCF also provides a [CollectionDataContract] attribute.  So the following code works just fine:

[CollectionDataContract]
public class MyCollection : List<MyObject>

October 24 2006