Humblecoder

Caution, this blog may be ironically named

Working With Named Branches in Mercurial


I’m a recent convert to the ways of the DVCS and my chosen flavour is Mercurial.  There is tons of documentation out there but I couldn’t find a “Quick Start for Half Wits to Branching” that appealed to my simple nature. So I thought I’d have a go at writing one based around my current workflow.

Mercurial has several ways to work with branches but I find named branches best suit my feature branch approach to using SCM.  There is an excellent guide to all the types with their advantages and disadvantages, here.

I’m going to assume you’re familiar with basic push, pull, and commits.


Creating and Moving Between Named Branches

To create a new named branch you simply enter:

hg branch "<branch name>"

This will mark the current working revision with the branch name you specified, but it will not be part of your repository until you next do a commit.  When you commit, it will create the branch and add all the changes since your last commit.  This is quite a big gotcha: if you want any changes to be committed on the old branch before moving to the new one, you must do a commit before branching.

Finding out the name of the branch you are on is done by entering:

hg branch

You can list all of the branches that are currently open by using:

hg branches

This outputs a list of branches with the changeset number that is the head of the branch, for example:
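
Illustrative output (the branch names and changeset IDs here are invented):

Feature X                      36:9e3f0c2d81aa
default                        34:a21f4e32bc1d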

The ‘default’ branch is set up when the repository is first created and is the equivalent of a mainline or trunk branch.

To move between branches you do an update with the branch name in the command. This will fail if you have any uncommitted changes, but you can force it to update and lose the changes by using -C.

This will just move between branches:

hg up "<branch name>"

This moves between branches losing any changes:

hg up -C "<branch name>"

The following example ties a few of these together and shows how creating a branch without committing first moves any recent changes into the new branch.

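An illustrative session (file and branch names invented) showing the gotcha:

hg branch                                   # reports 'default'
echo "new feature work" >> feature.txt      # edit a file, but do not commit
hg branch "Feature X"                       # mark the working copy as the new branch
hg commit -m "First commit on Feature X"

The edit to feature.txt was never committed on ‘default’, so it becomes part of the first commit on ‘Feature X’.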

Merging

Merging two branches is done by updating to the branch you want to be the target, then merging in the changes from the source branch by specifying its name. For example, if you wanted to merge the latest edits from default into the feature branch named “Feature X” you would do:

hg up "Feature X"
hg merge "default"

It will attempt to merge the changes without any user intervention, but if it can’t it will ask the user to do it.  On Windows, KDiff3 is installed along with Mercurial and is used to merge changes.  Again, none of these changes will be saved until you commit.  If you regularly work on a feature branch while others update the default branch, I would strongly urge you to regularly merge updates from the default branch to stop you having one almighty nightmare merge at the end.
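
For example, a full merge-and-commit round trip on the feature branch might look like this:

hg up "Feature X"
hg merge "default"
hg commit -m "Merge default into Feature X"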

Merging offers a preview option by passing --preview (-P) to the merge command.  This allows you to see upfront if there will be any conflicts or problems; I find this quite useful when assessing how long a merge will take.
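
To preview a merge from default without touching the working copy:

hg merge -P "default"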


Pushing

Mercurial by default pushes your whole repository, including all your local branches, to the remote repository.  This works seamlessly if the branch (or branches) already exists in the remote repository.  If they don’t, the push will fail, and you can use the -f option when pushing to force it to create the branches on the remote repository.
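
For example, to force the push and create any missing branches on the remote:

hg push -f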

But sometimes you want to push just the changes on the current branch, leaving the others behind.  I’ve recently discovered a great tip for doing this: you can push just the current revision with its associated history and parent by using:

hg push --rev .

The ‘.’ is a shortcut for the most recent changeset on the current branch, and it will only push changes on that branch.  This, of course, means that the branch must already exist in, or be created on, the parent repository.


Closing

Mercurial doesn’t support the deleting of branches (at least not that I’ve come across) but you can close a branch.  Closing a branch is done by updating to the branch and doing a commit with the close flag, for example:

hg up "<branch to close>"
hg commit --close-branch

This marks the branch as inactive and stops it appearing when you list the branches using hg branches.  It is still possible to see all branches, including closed ones, using this command:

hg branches -c

One major downside is that closing a branch does not prevent it from being pushed. This means you will still have to create it on the server or just push the current changeset as shown above.

Introducing OCInject


Update I’ve created a CodePlex project: http://ocinject.codeplex.com

I’ve been learning about Dependency Injection (DI) and Inversion of Control; one of the ways I’ve done this is by creating my own mini DI framework for use in DirLinker.  I’ve also looked at the big name frameworks and read lots along the way, but I want to give something back to the community that has taught me so much.  So I’m releasing the tiny DI framework used in DirLinker independently as OCInject.

OK, So Why Is This Different To X

My aim when creating OCInject was to create something that can be used in small projects, single exe utilities or one-off apps, where you want the advantages of a DI container but without the overhead of Castle, Autofac, et al.  So consider this DI lite: you simply take the two files, IClassFactory.cs and ClassFactory.cs, and drop them into your project. That’s it, done, no external dependency and no XML config file. What do you get?

Features:

  • A DI Container with fluent like configuration
  • Ability to resolve constructor parameters for registered types and passed in constructor arguments
  • Runtime delegate factory generation
  • Pseudo session support via IDisposable.

The feature list is tiny and is very unlikely to grow; well maybe by one, singleton support.  It is not meant to compete with or to replace anything that already exists.

How To Use It

Basic Resolution

As I said above, the first thing to do is copy IClassFactory.cs and ClassFactory.cs into your project, then fill the container and resolve your type.  For example:

    public interface IMyClass
    {    }  
    public class MyClass : IMyClass
    {  }  
    public class MyApp
    {
        public void Run()
        {
            IClassFactory factory = new ClassFactory();  
            factory.RegisterType<IMyClass, MyClass>();  
            IMyClass myClass = factory.ManufactureType<IMyClass>();
        }
    }

If the class has dependencies, we can inject these just by specifying them as constructor arguments on our class, and the container will resolve them if they are registered.  For example:

public interface IMyClassWithDepend
{}  
public class MyClassWithDepend : IMyClassWithDepend
{
    public MyClassWithDepend(IMyClass depend)
    { }
}  
public class MyApp
{
    public void Run()
    {
        IClassFactory factory = new ClassFactory();  
        factory.RegisterType<IMyClassWithDepend, MyClassWithDepend>();
        factory.RegisterType<IMyClass, MyClass>();  
        IMyClassWithDepend myClass = factory.ManufactureType<IMyClassWithDepend>();
    }
}

Auto Delegate Factories

For auto generated delegate factories we need to create a delegate that returns the contract type. Then register this with the container using the WithFactory method when registering the type. For example:

public delegate IMyClassCreatedByFactory FactoryMethodName();  
public interface IMyClassCreatedByFactory
{ }  
public class MyClassCreatedByFactory : IMyClassCreatedByFactory
{ }  
public interface IFactoryConsumer
{
    void DoWork();
}  
public class FactoryConsumer : IFactoryConsumer
{
    FactoryMethodName _Factory;
    public FactoryConsumer(FactoryMethodName factory)
    { _Factory = factory; }  
    public void DoWork()
    {
        IMyClassCreatedByFactory c = _Factory();
    }
}  
public class MyApp
{
    public void Run()
    {
        IClassFactory factory = new ClassFactory();  
        factory.RegisterType<IFactoryConsumer, FactoryConsumer>();
        factory.RegisterType<IMyClassCreatedByFactory, MyClassCreatedByFactory>()
            .WithFactory<FactoryMethodName>();  
        IFactoryConsumer myClass = factory.ManufactureType<IFactoryConsumer>();
        myClass.DoWork();
    }
}

Auto Delegate Factories With Parameters

Delegate factories can take parameters and pass them on to the constructors of objects.  This is still a bit limited because it doesn’t intelligently select the correct constructor, just the first it comes across (maybe a feature for the future :P).  To use them, just add parameters to your delegate declaration and create a constructor on your implementation type that matches.  You can still have dependencies that are resolved by the container in the constructor, for example:

public delegate IMyClassCreatedByFactory FactoryMethodName(String param1);  
public interface IMyClassCreatedByFactory
{ }  
public class MyClassCreatedByFactory : IMyClassCreatedByFactory
{
    public MyClassCreatedByFactory(IMyClass myClass, String param)
    { }
}  
public class FactoryConsumer : IFactoryConsumer
{
    FactoryMethodName _Factory;
    public FactoryConsumer(FactoryMethodName factory)
    { _Factory = factory; }  
    public void DoWork()
    {
        IMyClassCreatedByFactory c = _Factory("OCInject filling a gap that doesn't exist");
    }
}  
public class MyApp
{
    public void Run()
    {
        IClassFactory factory = new ClassFactory();  
        factory.RegisterType<IMyClass, MyClass>();
        factory.RegisterType<IFactoryConsumer, FactoryConsumer>();
        factory.RegisterType<IMyClassCreatedByFactory, MyClassCreatedByFactory>()
            .WithFactory<FactoryMethodName>();  
        IFactoryConsumer myClass = factory.ManufactureType<IFactoryConsumer>();
        myClass.DoWork();
    }
}

Where to Get it From and Further Examples

You can download it from BitBucket at http://bitbucket.org/humblecoder/ocinject/.  I don’t have any documentation at the moment but you can look at the DirLinker source and the unit tests for OCInject for more examples.

Enjoy and I hope someone finds it useful :)

The Curious Case of the Broken Bridge


For a long time VMWare Workstation has been my weapon of choice for virtualisation.  I’ve used it for everything from a VM to test apps in, to setting up a small domain. I’ve always just done the same things to set up a VM: create a machine, install the guest OS and install VMWare tools. VMWare even has a wizard to automate the last two steps.

Much to its credit (or my detriment, depending on your point of view) I’ve never had to dive deeply into any hows or whys, it’s just worked.  Until now.

The Problem

A couple of days ago I started messing with TeamCity and decided to set it up in a VM.  I followed my normal three steps to configure a VM, installed TeamCity and started, happily, CI’ing away.  I suspended the VM and closed VMWare; this proved to be a fatal mistake.

When I resumed the VM, it could no longer connect to the network.  I hadn’t changed any settings, and before suspending the VM I could connect to the TeamCity web interface from both my laptop and desktop.  So I started with some basic ‘power cycling’: I restarted the VM, then VMWare with the VM shut down and, finally, the host.  It didn’t make any difference, so I went on to checking the VM’s OS settings, but nothing jumped out as being wrong.

I had deliberately set up the VM to use bridged networking so it could be accessed from outside the host.  Switching the networking type to NAT restored network connectivity to the VM, but then it couldn’t be accessed externally.  It did, however, confirm that the problem was with the host, not the VM.

The Solution

Knowing roughly where the problem was, I checked what network adapters my machine had installed.  It had four:

[Screenshot: the machine’s four network adapters]

The two VMware connections are for host only and NAT networking, ‘Local Connection’ is my physical adapter and VirtualBox is for exactly what it claims.   VirtualBox, that jogged my memory: the last time I had used VMWare was before installing VirtualBox to experiment with (ultimately, I didn’t like it because of the host/guest folder sharing).  At this point I was wondering if VMWare was deliberately punishing me for trying VirtualBox, but that seemed a tad implausible.

I then started to wonder how VMWare knew which adapter to bridge with and noticed the VMWare Virtual Network Editor.  I fired it up and spotted the problem almost immediately:

[Screenshot: VMWare Virtual Network Editor]

The bridge was set to automatically select a network connection, and changing the ‘Bridged to:’ option to my network card restored the VM’s network connectivity.   I have no idea why it randomly selected the VirtualBox adapter over the physical one, maybe it just fancied meeting someone new.  But this got me back to my blissful CI’ing.

Having read about this a bit further, it seems to be a common problem for the bridge to select a wireless network card and not reselect the wired one when you switch. So this is a good first port of call if you lose networking in a VM.

I Was Wrong About Delegate Factories “Micro Optimisations”


In my previous post I talked about creating typed delegate factories; towards the end of the post I talked about optimising the performance by avoiding boxing when passing value type parameters around.  My premise was that if we could use generics to select strongly typed method signatures for the method that constructs the type, we could avoid the boxing and unboxing of value types.  I think, outside of dependencies, the most common constructor arguments are going to be value types.  But I was wrong.

Why?

It all boils down to Constructor.Invoke; this is the method my tiny dependency injection framework uses to create new instances of objects from the container.  Its only signature takes params object[], so all our hard work is useless because the values will be boxed for this call.
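
A minimal illustration of the problem, using a made-up Widget type (nothing here is from the OCInject source):

using System;
using System.Reflection;

public class Widget
{
    public Widget(int size) { }
}

public static class BoxingDemo
{
    public static void Main()
    {
        ConstructorInfo ctor = typeof(Widget).GetConstructor(new[] { typeof(int) });

        // The int is boxed the moment it goes into the object[],
        // no matter how strongly typed the factory delegate above it was.
        object instance = ctor.Invoke(new object[] { 42 });
        Console.WriteLine(instance.GetType().Name); // Widget
    }
}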

The biggest thing I was wrong about, though, was that reflection is quicker than boxing.  Let’s look at the timing of the reflective generic approach:

[Screenshot: timing for the reflective generic approach]

So generating the factory takes 79ms when you reflect over the type and remove any generics.  But what about calling using params object[]?

[Screenshot: timing for the params object[] approach]

This takes 31ms to create the factory.  So it is quite clear that, at least upfront, the params approach is substantially quicker.  One thing I will point out is that the tests didn’t use the factories to create any types.  But the values will still be boxed when passed to the Invoke method, so I would still expect the second method to be more performant.

My point is that we need to look more closely at how we are going to use the code and whether there is any optimisation, like caching the generated delegates, that would speed up calls after taking an initial hit.  If we didn’t need to call Constructor.Invoke, would the second approach still be quicker overall in our application?
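
As a sketch of the caching idea (invented names, nothing from the OCInject source): the expensive generation happens once per delegate type and every later request is just a dictionary lookup.

using System;
using System.Collections.Generic;

public static class FactoryCache
{
    private static readonly Dictionary<Type, Delegate> _cache =
        new Dictionary<Type, Delegate>();

    // build is only called the first time a delegate type is requested;
    // afterwards the compiled delegate is served straight from the cache.
    public static TDelegate GetOrCreate<TDelegate>(Func<TDelegate> build)
        where TDelegate : class
    {
        Delegate cached;
        if (!_cache.TryGetValue(typeof(TDelegate), out cached))
        {
            cached = (Delegate)(object)build();
            _cache[typeof(TDelegate)] = cached;
        }
        return (TDelegate)(object)cached;
    }
}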

Creating Typed Delegates at Runtime Using Expression Trees


In my previous post I talked about using the abstract factory pattern instead of the service locator pattern. Towards the end of the post I went on to talk about delegate factories, and I want to clear up any misunderstanding before going any further.  I think injecting the factory via the constructor is the right way to do this; the delegate factories I talked about are just a simplification and a way to remove the need to write a lot of simple and repetitive code.  I don’t think it helps that I switched example code towards the end, so here is an example using IFile / FileConsumer.

delegate IFile IFileFactoryForFileName(String fileName);  
public class FileConsumer
{
    private IFileFactoryForFileName CreateFileForFileName;  
    public FileConsumer(IFileFactoryForFileName ifileFactory)
    {
        CreateFileForFileName = ifileFactory;
    }  
    public void DoSomething()
    {
        IFile fileA = CreateFileForFileName("test.txt");  
        //uses fileA
    }
}

In this example we don’t have an interface and implementation for a factory, just the delegate declaration (this would live near the IFile interface), and we trust the container to wire this up for us when it is resolving the constructor arguments.  This leads nicely into the how.

All About Expression

When first looking at the problem of how to generate the delegates, I thought: easy, anonymous delegates or lambdas and let type inference take care of the rest.  The reality is somewhat more complicated.  The best way to achieve this is using Expression Trees.

Expression trees are the foundation of LINQ and a high level abstraction that allows you to treat a tree of objects as code.  This allows us to build an expression at runtime that represents the required factory, then compile it to a lambda and finally the delegate we require.

All this starts at the Expression class.  It is an abstract class that is used as a base class for expression objects that represent the various things we can do.   It also has static methods that create the relevant expression objects for us.  The ones we are interested in are listed below, with a small standalone example after the list:

  • ParameterExpression – This is what it sounds like, it allows us to create a parameter that is going to be passed into and used by the expression.

  • ConstantExpression – Again it’s obvious what this does, it represents a value that will not change for the life time of the expression.

  • NewArrayExpression – This represents the creation of an array and its content within the expression.

  • MethodCallExpression – This allows the expression to call external methods.  So far I’ve only been able to get this to work for public statics, but this is not a major problem.

  • Finally, LambdaExpression – This allows us to pull all of the elements together and compile it into a lambda or, as in this case, a delegate of the type supplied, so long as the return type and parameters match.
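
Here is a tiny standalone example using most of those pieces; it has nothing to do with the container, it just shows the shapes involved:

using System;
using System.Linq.Expressions;

public static class ExpressionDemo
{
    public static void Main()
    {
        // Build (int x) => Math.Max(x, 10) by hand.
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        ConstantExpression ten = Expression.Constant(10);
        MethodCallExpression call = Expression.Call(
            typeof(Math).GetMethod("Max", new[] { typeof(int), typeof(int) }),
            x, ten);

        Func<int, int> fn = Expression.Lambda<Func<int, int>>(call, x).Compile();
        Console.WriteLine(fn(3)); // prints 10
    }
}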

Now we know what we are going to use, let’s look at how we generate the factory:

public virtual void RegisterDelegateFactoryForType<TResult, TFactoryDelegateType>()
{
    MethodInfo delegateInvoker = typeof(TFactoryDelegateType).GetMethod("Invoke");
    ParameterExpression[] factoryParams = GetParamsAsExpressions(delegateInvoker);  
    //Build the factory from the template
    MethodInfo mi = typeof(ClassFactory).GetMethod("FactoryTemplate");
    mi = mi.MakeGenericMethod(typeof(TResult));  
    Expression call = Expression.Call(mi, new Expression[] {Expression.Constant(this),
        Expression.NewArrayInit(typeof(Object), factoryParams)} );  
    TFactoryDelegateType factory = Expression.Lambda<TFactoryDelegateType>(call, factoryParams).Compile();  
    _typeFactories.Add(typeof(TFactoryDelegateType), factory as Delegate);
}  
private ParameterExpression[] GetParamsAsExpressions(MethodInfo mi)
{
    List<ParameterExpression> paramsAsExpression = new List<ParameterExpression>();  
    Array.ForEach<ParameterInfo>(mi.GetParameters(),
        p => paramsAsExpression.Add(Expression.Parameter(p.ParameterType, p.Name)));  
    return paramsAsExpression.ToArray();
}  
public static T FactoryTemplate<T>(ClassFactory factory, params Object[] args)
{
    return factory.ManufactureType<T>(args);
}

Starting at the top, we use the MethodInfo for the Invoke method of the delegate to get all the parameters for the required delegate.  We then get a MethodInfo for a template method that uses generics; it is important at this point that we replace the generic parameters with the concrete types, otherwise compiling the lambda expression to the delegate will fail.  It then sets up the call to the template factory method, builds the correctly typed delegate and adds it to a collection of prebuilt delegates ready to be passed into constructors.  This expression tree basically builds a lambda expression that looks like the following:

delegate MyInterface MyInterfaceFactory(String a, String b);  
// transforms to  
(String a, String b) => FactoryTemplate<MyInterface>(classfactory, a, b);

Memory and Performance Considerations

Before wrapping up, let’s just reflect over the performance and memory implications of this.  We are using delegates and statics, a typical recipe for disaster in the managed world, but fear not.  The use of the constant expression that relates to the ClassFactory effectively adds a new root to the ClassFactory instance, and the class factory object holds the roots to the delegates.  This means before the ClassFactory instance goes out of scope we need to dispose of it correctly to ensure we release the roots to the delegates. In reality, this is unlikely to cause us problems as we would normally want the ClassFactory instance to last as long as the application.
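
As a sketch, disposal only needs to drop those roots; assuming ClassFactory implements IDisposable and owns the _typeFactories dictionary from the listing above, something like this would do:

public void Dispose()
{
    // Dropping the delegates releases the only roots to them, which in
    // turn releases the constant-expression root back to this instance.
    _typeFactories.Clear();
}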

What might be a performance problem, though, is the potential boxing and unboxing of value types caused by passing parameters into the FactoryTemplate method as an Object array.  There is no easy answer to this because we don’t know up front how many parameters we are going to need or what type.  We could, however, make a micro optimisation to avoid it by offering several overloads that take between 0 and 4 parameters before falling back on to a params function, with the idea that most people won’t need that many.  I will leave the exercise of converting the expression tree to use the correct generic template factory to the reader :)

Although I haven’t committed the latest changes, I do have this 90% done and I will push to CodePlex soon.  I am going to pull out the DI stuff I’ve been working on and host it at BitBucket.

Detangling Service Locator From Dependency Injection


I’m sure, like many exploring Dependency Injection (DI) for the first time, the biggest problem I had was how to rid the world of those pesky little new statements.  The first application I tried to use DI with in anger was DirLinker.  I had no prior knowledge of any of the DI frameworks and started looking at patterns that would help me remove all the new statements.  Unfortunately, I hit up on service locator and the rest is (messy) history.  I ended up with code that looks like the following inside DirLinker:

IFile aFile = ClassFactory.CreateInstance<IFile>();
aFile.SetFile(file);

There are a couple of things that bother me about this.  First is the static; I’m not a fan of statics at all, in fact, I hate them.  Next is the SetFile call: for any meaningful work to be done with this class you need to pass in a filename, and it should be a constructor argument.  In all fairness, the SetFile problem is more to do with the limitations of my roll-your-own DI framework than anything else.

In spite of my niggling doubts, it worked so I left it alone.  Fast forward to last week when I came across a blog post entitled Service Locator is an Anti-Pattern. This confirmed my initial discomfort and highlighted a problem I hadn’t thought of, namely that the class doesn’t advertise its dependencies.  Very bad for reuse.

Factories

As the blog post suggests, a better way of doing this would have been to use factories, so let’s go over a couple of examples of how this could be achieved.

Virtual Instance Methods

This is a method I would use when trying to create a seam in legacy code to inject a dependency for unit testing.  To do this you create a virtual method on the class that needs to create the object. In the unit test, inherit from the class under test and override the factory method to return your mock or stub.  For example:

public class FileConsumer
{
    protected virtual IFile GetFile(String filename)
    {
        return new FileImp(filename);
    }  
    public void DoSomething()
    {
        IFile fileA = GetFile("test.txt");
        IFile fileB = GetFile("test2.txt");
        //uses files
    }
}  
[TestFixture]
public class FileConsumerTests : FileConsumer
{
    protected override IFile GetFile(string filename)
    {
        return new FileMock(filename);
    }  
    [Test]
    public void DosomethingTest()
    {
        //Code to perform test
    }
}

This is good because it’s clear to any maintainer what the dependency is and where it is coming from.

Abstract Factory Pattern

This would loudly and proudly advertise the dependency on the ability to create IFile objects.  It works by creating a class with methods solely responsible for creating IFile objects and then taking this as a dependency for your class.  For example:

public interface IFileFactory
{
    IFile CreateFile(String filename);
}  
public class FileFactory : IFileFactory
{
    public IFile CreateFile(string filename)
    {
        return new FileImp(filename);
    }
}  
public class FileConsumer
{
    IFileFactory _fileFactory;  
    public FileConsumer(IFileFactory fileFactory)
    {  
        _fileFactory = fileFactory;
    }  
    public void DoSomething()
    {
        IFile fileA = _fileFactory.CreateFile("test.txt");
        IFile fileB = _fileFactory.CreateFile("test2.txt");
        //uses files
    }
}

This is the pattern recommended by the blog post and it covers all the things I don’t like about Service Locator.  But I think we could take advantage of some of C#’s language features and the fact we are creating objects via a container to achieve something similar but with less code.

Delegate Factories

Before I go into this, a little disclaimer: this is totally and utterly inspired by Autofac’s wonderful generated delegate factory functionality; you can read more about it here.

OK, honesty out of the way, let’s look at what the container knows and what it does. The DI container knows which interfaces should map on to which concrete types and it has the ability to resolve constructor arguments for types that it knows about.  This is just a generic version of a factory, so it makes sense to take advantage of it.

One way would be to use Func&lt;TResult&gt; (and its friends).  If the class being constructed requires a constructor parameter of Func&lt;TResult&gt;, where TResult is the required interface, the container could generate an appropriate delegate at runtime and pass it in.  For example:

   public class FileConsumer
   {
       Func<IFile> _fileFactory;  
       public FileConsumer(Func<IFile> fileFactory)
       {  
           _fileFactory = fileFactory;
       }  
       public void DoSomething()
       {
           IFile fileA = _fileFactory();
           IFile fileB = _fileFactory();  
           //uses fileA
       }
   }

This would require a slight modification to the container and could be extended by the use of Func&lt;T1, ..., Tn, TResult&gt;; the container would match the delegate’s parameters to a suitable constructor.
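
For instance, the FileConsumer above could take its filename through the delegate; here is a sketch of how that might look (IFile as in the earlier snippets):

public class FileConsumer
{
    Func<String, IFile> _fileFactory;

    public FileConsumer(Func<String, IFile> fileFactory)
    {
        _fileFactory = fileFactory;
    }

    public void DoSomething()
    {
        // The container would match the String parameter to a suitable
        // constructor on the concrete IFile implementation.
        IFile fileA = _fileFactory("test.txt");
        //uses fileA
    }
}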

This is quite a powerful and time saving concept, but I don’t like the vagueness of Func&lt;&gt;; it’s not as expressive as it could be.  So the method I have chosen for DirLinker is to use strongly typed delegates and generate them at runtime from the container.   It works in the same manner, only you declare the delegate up front and pass it in as an argument.  This also requires a bit of configuration, but the pay off is worth it.  I have started work on the implementation for DirLinker, so this example is taken directly from the unit tests and can be found here.

interface ITestClassWithDelegateFactory { ITestClassFactory Factory { get; set; } }
delegate ITestClass ITestClassFactory();  
class TestClassWithDelegateFactory : ITestClassWithDelegateFactory
{
     public ITestClassFactory Factory { get; set; }
     public TestClassWithDelegateFactory(ITestClassFactory delegateFactory)
     {
         Factory = delegateFactory;
     }
}  
[Test]
public void ManufactureType_Type_delegate_factory_manufactures_correct_type()
{
    IClassFactory testClassFactory = new ClassFactory();  
    testClassFactory.RegisterType<ITestClass, TestClass>()
                .WithFactory<ITestClassFactory>();  
    testClassFactory.RegisterType<ITestClassWithDelegateFactory, TestClassWithDelegateFactory>();  
    ITestClassWithDelegateFactory manufacturedType =
            testClassFactory.ManufactureType<ITestClassWithDelegateFactory>();
    ITestClass instance = manufacturedType.Factory();  
    Assert.IsInstanceOf(typeof(TestClass), instance);  
}

Admittedly there is quite a lot going on here, but the main points are: we registered a type and a factory in a strongly typed manner with the container, then pulled the factory out and used it to create an instance of a different type.  I am going to cover this in quite some detail, along with how to create strongly typed delegates at runtime, in my next post.

Don’t Be a Fool, Wrap Your Tool!


As a hormone ravaged teenager, I squirmed uncomfortably as parents, teachers and community health practitioners imparted the words of wisdom “Don’t be a fool, wrap your tool”.  So it is fitting that I’m equally squeamish when coming across the same advice as an adult.

What am I talking about?  Creating wrappers for anything at all on the boundaries of your code for the purpose of unit testing.  I’ve been struggling to think of a succinct way to explain this so I decided to go through a worked example.

Consider the following code:

    public class CommandReceiver
    {
        public void WaitForMessage()
        {
            using (NamedPipeServerStream pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.In))
            {  
                // Wait for a client to connect
                Trace.Write("Waiting for client connection...");
                pipeServer.WaitForConnection();  
                Trace.WriteLine("Client connected.");
                try
                {
                    // Read user input and send that to the client process.
                    using (StreamReader sr = new StreamReader(pipeServer))
                    {
                        String command = sr.ReadLine();
                        DispatchCommand(command);
                    }
                }
                catch (IOException e)
                {
                    Trace.WriteLine("ERROR: {0}", e.Message);
                }
            }
        }  
        private void DispatchCommand(String command)
        {
            //Knows how to deal with messages.
        }
    }

Looking at this code there are a number of problems, but let’s focus on the unit testing problems.  It is impossible to unit test this code because it creates a real pipe server and waits on a blocking call before continuing.  This means to test this code we would need to create a pipe client and connect, and all of this would have to be threaded because of the blocking call.  Of course, this would make a good integration test because it tests that the pipe is connectable and receives a string message.

Before we set about making this code more unit test friendly, let’s look at what we are trying to unit test.  We are trying to test that the WaitForMessage method can receive a string and pass it on.  For us to do this we need to abstract the pipe and stream out.  Also, while we are there, let’s remove the DispatchCommand method since it violates SRP and would be more testable on its own.  So let’s take a second stab at the code.

public interface INamedPipeServer : IDisposable
{
    void WaitForConnection();
}  
public class ManagedNamedPipeServer : INamedPipeServer
{
    private NamedPipeServerStream _pipeServer;  
    public ManagedNamedPipeServer(String name, PipeDirection pipeDir)
    {
        _pipeServer = new NamedPipeServerStream(name, pipeDir);
    }  
    public void WaitForConnection()
    {
        _pipeServer.WaitForConnection();
    }  
    public void Dispose()
    {
        _pipeServer.Dispose();
    }  
}  
public interface IStreamReader: IDisposable
{
    String ReadLine();
}  
public class ManagedStreamReader : StreamReader, IStreamReader
{
    public ManagedStreamReader(Stream stream) : base(stream)
    {}
}  
public class CommandReceiver : ICommandReceiver
{
    INamedPipeServer _NamedPipeServer;
    ICommandDispatcher _CommandDispatcher;  
    public CommandReceiver(INamedPipeServer pipeServer, ICommandDispatcher dispatch)
    {
        _NamedPipeServer = pipeServer;
        _CommandDispatcher = dispatch;
    }  
    protected virtual IStreamReader GetStreamReader()
    {
        //Code to create a stream reader from the pipe
    }  
    public void WaitForMessage()
    {
        // Wait for a client to connect
        Trace.Write("Waiting for client connection...");
        _NamedPipeServer.WaitForConnection();  
        Trace.WriteLine("Client connected.");
        try
        {
            // Read user input and send that to the client process.
            using (var sr = GetStreamReader())
            {
                String command = sr.ReadLine();
                _CommandDispatcher.DispatchCommand(command);
            }
        }
        catch (IOException e)
        {
            Trace.WriteLine("ERROR: {0}", e.Message);
        }
    }
}

This is where my uncomfortable squirm returns because my “keep-it-simple” sense is tingling.  I’ve just taken a fairly simple class that was around 30 lines and turned it into 75 lines of complicated OOP code.  I would hope that anyone reading this can follow it, but in a real project this will probably be split over several files, and the two styles of wrapping (composition for NamedPipeServerStream, because it is sealed, and inheritance for StreamReader) can add significant cognitive burden to understanding what is going on.

What benefits does this bring?  It allows us to unit test that we read a string and pass it on to a dispatcher.  But I would go out on a limb and say that this is the least likely of all the code in that class to go wrong.  The real problem area will be in connecting the pipe and reading from it.  We can make assumptions about failure conditions from the docs but, as we all know, docs != reality.

Is the abstraction here a benefit? In my opinion, not to the extreme level we have here. The abstraction at ICommandReceiver will allow us to swap out how the application does IPC calls, making it flexible in the future, and, as I alluded to above, the unit test ‘coverageability’ is of lower value in this instance.

My point in all this rambling nonsense?  In an ideal world we would have both sets of tests and the unit tests would cover all the error conditions, but in the real world we only have a finite and, usually, short amount of time, with pressure from project managers and deadlines.  So we have to look at what will bring us the most value and focus on that.  I believe at application boundaries like this one we should focus our attention on writing integration tests because they will bring us more value in the long run.  I would not shun unit testing entirely, and in this example the ICommandDispatcher and other supporting classes would have a full suite of unit tests.

As a side note, my final version of the code would be a midway point between the two listings.

Why DirLinker Doesn’t Support XP


I posted up the Directory Linker CodePlex project a little over 2 weeks ago and it has clocked up a reasonable 103 downloads in that time.  Last night I was glancing over where the traffic came from on the stats page; a lot of referrals have come from this review site, which states that:

The app works on Windows 7 and Vista only, support for Windows XP is planned in the near future.

This is incorrect, and maybe I should have been clearer that Directory Linker will not support XP unless there is some overwhelming outcry for it or someone else does it.

Why?

First, for selfish reasons: I don’t use a machine that has XP installed anymore; furthermore, I would encourage anyone on XP to upgrade, for security if nothing else.  At the end of the day, I created this tool for my use and I don’t use XP.

The second reason is technical.  Without going too far in, symbolic directory links have been in NTFS for quite a while but only got ‘first class’ support from the OS recently.  Prior to Vista they were called Junction Points and had all kinds of weird side effects (more details here). Also, the API calls used by Directory Linker to create the symbolic link are only supported by Vista and above.  It is technically possible to do the same for XP, and it is not difficult, but Directory Linker simply doesn’t do it.

I hope that clears things up.  Also, if you find a download link for Directory Linker that links to SoftSea in any way, please do not use it.  If you’re not a programmer it’s safe to stop reading now.

Fork You!

Well, I’d rather you didn’t.  I will happily accept any patch that is submitted to the project to do this.  To help you out, here is a quick overview of how to do it.

  1. Locate FolderImp.cs and make the CreateLinkToFolderAt method virtual.
  2. Create a new class that inherits from FolderImp and name it to show it is for XP.
  3. Override only the CreateLinkToFolderAt method and add the code to create a link in XP (Tip: start with the DeviceIoControl API).
  4. Open Program.cs, go to the FillIoCContainer method and add code to conditionally select the correct IFolder class when the application starts.

Quite simple really.
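
As a rough sketch of those steps (the names and signatures here are guesses for illustration; check FolderImp.cs for the real ones):

// Hypothetical shapes only; the real DirLinker types will differ.
public interface IFolder
{
    void CreateLinkToFolderAt(string linkPath);
}

public class FolderImp : IFolder
{
    // Step 1: virtual so it can be overridden.
    public virtual void CreateLinkToFolderAt(string linkPath)
    {
        // Vista and above: create the link via the CreateSymbolicLink API (elided).
    }
}

// Steps 2 and 3: an XP-specific subclass overriding just the link creation.
public class FolderImpXP : FolderImp
{
    public override void CreateLinkToFolderAt(string linkPath)
    {
        // XP: create an NTFS junction point via DeviceIoControl with
        // FSCTL_SET_REPARSE_POINT (elided).
    }
}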

User Group Meetings Brought Out the Best in Me


I am a fairly junior developer with just short of 3 years commercial experience in an office where the average experience level is 10+ years.  I love to read and learn about software development nearly as much as I love to actually write code, and I have lots of ideas and opinions on code, style, design, etc. For several years I’ve been trundling along, nervously squeaking up to more senior staff members and often being shot down or told “that’s a great idea but we don’t have time/resource/it doesn’t fit in with what we do”.  This has all led to a mixture of frustration and self doubt as to whether what I am learning is right, or even unprofessional at times. Enter NxtGenUG Manchester.

What Is It?

NxtGenUG is a .NET users group that meets once a month.  The meetings generally consist of a short member ‘nugget’ (~10 minutes), a longer ‘featured’ speaker, eating pizza, giving away swag and most importantly, yes even more important than pizza, socialising with other interested developers.

The nugget is a short talk from a member about something they are learning about or know about.  A wide range of topics have been covered, from printing in Silverlight to the Pomodoro technique.  While ten minutes is not a very long time, it does give a wonderful little introduction to a topic and provides a talking point over pizza.

The main attraction, as it were, is a full hour long presentation on a topic. There have been some quite interesting presentations from presenters of varying quality; the highlights include a crash course in TypeMock and a tour of PLINQ.  Even if the topics are not things you work with directly, it’s a great way to learn about new things and keep up to date.


Why Has It Brought Out The Best In Me?

While all the presentations and information have been great, what has really helped me is the social side of things.  I do a lot of reading and learning and I form opinions based around it.  What I really want is a place to voice these opinions and get a good debate going with like minded people.  I am a true believer in “strong opinions, weakly held” and, from this point of view, the conversations I’ve had at the user group have really challenged me and changed my thinking, for the better.

Don’t get me wrong, people at work are interested as well, but it’s nice to see and hear other people in the community talk about their experiences and the challenges they’ve faced introducing Agile, TDD, etc. This type of information sharing is invaluable and really allows you to see things from a completely different point of view than chatting with colleagues normally would.   For example, I’ve picked up several invaluable tips for guerrilla tactics to introduce things in a resistant culture.

More importantly for me, it’s given me the confidence to fight my corner more in the office and really push forward.  Listening to other people talk and talking to speakers, I’ve realised that you don’t have to be right all of the time, just open minded and have confidence in your own knowledge.  I have taken this to an extreme level by presenting my own member nuggets at the user group, starting this blog and running several training sessions/OpenSpaces discussions at work. Now I’m looking toward getting on the amateur speaking circuit.

In summary, just being around other people from different work environments and cultures to mine has really driven me and given me the confidence to push forward with change.  User groups aren’t just about the presentations; they are about the community and improving yourself and your workplace through other people’s experiences.   For me this has been a massive success, so thank you Steve, Andy, John, Joel and countless others.

DirLinker: An Update


In my previous post introducing DirLinker I posted a short to do list, and I can happily check off two of the shorter items.

  • Remove the dependency on Telerik and tidy up the UI -> Done, well it’s still not pretty but it doesn’t have a typo this time ;)
  • Post the source up on Codeplex -> http://dirlinker.codeplex.com/

Whether the rest will ever get done is another matter.  If you want to contribute then please go ahead.