Humblecoder

Caution, this blog may be ironically named

Multiple Asserts

| Comments

I’ve read many books and blogs that advocate having only one assert in a unit test, and lots of people take that to mean literally one assert statement.  I’ve always disagreed with taking it literally; I think of it as one logical assert, as in you assert one concept at a time, which may lead to multiple assert statements.

The main arguments against having more than one assert statement are that it’s not as readable and that it can be difficult to work out which assert is failing.  My usual response is to create my own assert method whose name accurately describes what the multiple asserts check, and hide the real asserts inside it.  For example:

public void AssertIsValidClone(Customer oldCustomer, Customer actualCustomer)
{
    Assert.AreNotSame(oldCustomer, actualCustomer);
    Assert.AreEqual(oldCustomer.Name, actualCustomer.Name);
    Assert.AreEqual(oldCustomer.Address, actualCustomer.Address);
}

[Test]
public void Clone_ValidCustomer_ValuesAreTheSameReferenceIsDifferent()
{
    //Some setup

    var result = aCustomer.Clone();

    AssertIsValidClone(aCustomer, result);
}

OK, so this is a very contrived example, but we can clearly see the intent of the assert rather than having several statements making it harder to understand.  What this doesn’t do is address the second concern: any one of those three asserts could fail, so we fix it, then the next fails, and so on.

Enter an NUnit plug-in called OAPT.  It allows you to write multiple asserts that generate multiple unit tests in the runner, so you can see exactly which one is failing.  I won’t warble on too much about the details because it’s all in the link, but let’s just rewrite our unit test:

[Test, ForEachAssert]
public void Clone_ValidCustomer_CloneIsNewItemWithValidData()
{
    //some setup

    var newCustomer = originalCustomer.Clone();

    AssertOne.From(
        () => Assert.AreNotSame(originalCustomer, newCustomer),
        () => Assert.AreEqual(originalCustomer.Name, newCustomer.Name),
        () => Assert.AreEqual(originalCustomer.Address, newCustomer.Address));
}

Much more concise, and it will run as three separate tests.  I still have one issue with it though: each generated test uses the test name with an appended number.  It would be nice if you could pass in some text for it to append instead.  But then again it is open source, so maybe I could add that feature myself :)

Directory Linker 2.1 – XP Support

| Comments

Today I have pushed new binaries to CodePlex for DirLinker.  This new release brings support for folder links in Windows XP/2003.  It is not able to create file links because of the limitations of reparse points in earlier versions of Windows.

This is something I didn’t think I would do, but after releasing DirLinker 2 on CodePlex a ticket was raised in the bug tracker because it was failing on XP, and while chatting to a friend on IM about it he basically said “Well why doesn’t it?”.  The main reason was that the API call for creating symbolic links is only available in Vista and later.  XP does have an equivalent, but the behaviour of the links it creates is subtly different: in XP they are reparse points (junctions), whereas in Vista+ they are true symbolic links (similar to *nix symlinks).  I will go into the difference in a future post.
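For the curious, the Vista+ API in question can be reached from C# with a P/Invoke declaration along these lines.  This is a minimal sketch, not DirLinker’s actual source, and the wrapper method name is my own:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

// Sketch of the Vista+ call DirLinker relies on: kernel32's
// CreateSymbolicLink. XP simply has no such entry point, which is why
// folder links there fall back to junction-style reparse points.
static class SymbolicLink
{
    private const int SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.I1)]
    private static extern bool CreateSymbolicLink(
        string lpSymlinkFileName, string lpTargetFileName, int dwFlags);

    public static void CreateDirectoryLink(string linkPath, string targetPath)
    {
        if (!CreateSymbolicLink(linkPath, targetPath, SYMBOLIC_LINK_FLAG_DIRECTORY))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}
```

Note that creating symbolic links also requires the SeCreateSymbolicLinkPrivilege, which normally means running elevated.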

It turns out that, with a little help from a CodeProject article, it took less than an hour to put in and test, so it made it in.  I am definitely closing this to new features now; only bug fixes will be added from here on.

Directory Linker 2

| Comments

After literally months of procrastination, Directory Linker 2 is finally in a state that I’m not too ashamed of.  So today I have posted new binaries on CodePlex.

What’s New?

  • Undo Support – If the process of moving and deleting a folder before creating a link at the same location failed, you could end up with some files in the new location, some in the old, and two partial directory structures.  If this happens now, DirLinker will offer to put the original folder back how it was.

If you’re using the just delete option and it fails, undo cannot undelete any files but it will put back any folders it deleted.

  • File Links – It can now create symbolic links for files as well as directories.  You don’t have to do anything different, just select a file in the link location or the link to field.  There has been a small change to the UI to allow you to browse for files as well as folders.

In a future post I’m going to talk about the difference between symbolic links and shortcuts.  For now the important difference is that, with a symbolic link, the application opening the file doesn’t know it is only a link.

  • Progress Window Changes – The progress window has been slightly overhauled and now keeps a list of everything it has done.  So if it does fail or something goes wrong, you can work out exactly what it’s done.

Progress Window

With these features I’m planning on parking Directory Linker development, I will of course fix any bugs that may come up but I can’t see any new features being added.

Enjoy!

PS, If you have no idea what Directory Linker is, this is a good place to start.

Visual Studio Versions || .Net Versions != C# Version

| Comments

Update: generic variance (and a couple of other things) can be used when multi-targeting, see here: http://blogs.msdn.com/ed_maurer/archive/2010/03/31/multi-targeting-and-the-c-and-vb-compilers.aspx

I’m finally getting stuck into finishing off DirLinker 2.0 and, with VS2010 being released, I decided to upgrade the project to VS2010 while still targeting .NET 3.5 for compatibility.  While enjoying the new IDE features, I discovered that some of the C# 4.0 features work when targeting .NET 3.5.

Optional and Named Parameters

This is something I’ve been looking forward to; I think it will make my code prettier by removing the ridiculous number of overloads you can sometimes end up with.  I’m not going to explain the feature because it has been well covered by better writers than I :).  So imagine my surprise when I discovered I could use this feature while targeting .NET 3.5.  Just to test the theory, I wrote the following console application and targeted it at .NET 2.0:

static void Main(string[] args)
{
    FunctionCalledUsingNamedParams("str1", "str2");
    FunctionCalledUsingNamedParams(String2: "str2", string1: "str1");  
    FunctionWithTwoOptionalParams();
    FunctionWithTwoOptionalParams(200);
    FunctionWithTwoOptionalParams(56, "Called from main");
    FunctionWithTwoOptionalParams(message: "test", number: 29);  
    Console.ReadKey();
}  
static void FunctionCalledUsingNamedParams(String string1, String String2)
{
    Console.WriteLine(String.Format("{0} : {1}", string1, String2));
}  
static void FunctionWithTwoOptionalParams(Int32 number = 1, String message = "default message")
{
    Console.WriteLine(String.Format("{0}: {1}", message, number));
}

This compiles, runs and outputs the correct information just fine.  I even ran it on a machine that had never seen .NET 4.0 to be sure.  It would appear it’s a C# 4.0 feature, not a .NET 4.0 feature.  I have only tried this with optional parameters, and I doubt that dynamic and co/contra-variance will work; I think the general rule is that if it doesn’t require Base Class Library or CLR support then it will work.

(The source for the app, along with a compiled version, is available here.)

Of Course This is Not New

Within the past 12 months my workplace has moved from VS2005 to VS2008 but is still targeting .NET 2.0.  One of the things I quickly discovered was that lambdas, auto-properties and object initialiser syntax all still work perfectly when targeting .NET 2.0 from VS2008, making them features of C# 3.0, not .NET 3.5!
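As a sketch of what that means in practice (class and names made up for illustration), all of the following is C# 3.0 syntax that the compiler lowers to plain .NET 2.0 IL, so it builds fine in a .NET 2.0 project:

```csharp
using System;

// Auto-properties, object initialisers, var and lambdas are all compiler
// features; none of them need the .NET 3.5 assemblies.
class Customer
{
    public string Name { get; set; }      // auto-property (C# 3.0)
    public string Address { get; set; }
}

// .NET 2.0 has no Func<T,TResult>, so declare a delegate for the lambda.
delegate string Formatter(Customer c);

static class Demo
{
    public static string Run()
    {
        // Object initialiser + implicitly typed local (C# 3.0)
        var customer = new Customer { Name = "Bob", Address = "1 High St" };

        // Lambda assigned to a .NET 2.0-compatible delegate (C# 3.0)
        Formatter format = c => c.Name + ", " + c.Address;

        return format(customer); // "Bob, 1 High St"
    }
}
```

Features that lean on new library types, such as LINQ to Objects or expression trees, are the ones that genuinely require the .NET 3.5 assemblies.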

So it’s important to remember this simple expression:

Visual Studio Version || .Net Version != C# Version

Why Can’t TFS Remember My Credentials?!

| Comments

At the office we use TFS, and it pesters me for credentials every time I start Visual Studio because I’m not on the domain.  This quickly becomes very tiresome.

Solution

I set about trying to cache my credentials this morning after, mixed with a case of the Mondays, I’d finally had enough of it pestering me.  The dialog does not have a remember my password option, so the next stop is to save the credentials in my Windows profile.  To do this:

  1. Open “Control Panel”
  2. Go to “User Accounts” and select the option “Manage your network passwords”
  3. In the dialog enter the path to your TFS server and your credentials

This worked great for Visual Studio but I still had to log into the TFS Sharepoint portal site every time.  I discovered you also have to enter the TFS server address into the “Intranet Zone” in “Internet Options”.  To do this:

  1. Open “Control Panel”
  2. Go To “Internet Options” and select the “Security” tab
  3. Then select “Intranet Zone” and click on the “Sites” option
  4. Now enter the address of the TFS server

Then you should never be harassed again :)  This works / is needed on Server 2008 and Vista; as per usual the story is a lot better under Win 7 and Server 2008 R2.  It’s just a shame my main development VM is 2008 and I don’t have time to reimage it.

OCInject Release 2

| Comments

When I originally released OCInject I omitted one important feature: lifestyle management.  This, coupled with the release of a feature-packed TinyIOC, has made me re-evaluate my position on not adding too many features to OCInject.  Release 2 of OCInject brings the following features:

  • Life Style Management – It’s possible to register types as singletons or instances
  • Func Factories – OCInject has supported delegate factories from day one; now you can use Func<T> instead of typed delegates
  • Simplified Registrations – In the previous release of OCInject, types were registered using TContract -> TImplementation to enforce programming to an interface.  It’s now possible to register with TContract as the concrete type in one call
  • Largest Resolvable Constructor – In the previous release OCInject simply grabbed the first constructor it found.  It will now select the greediest constructor it can resolve.  For performance reasons, it assumes that any known type is resolvable.
  • Unresolvable Callback – It is now possible to supply a callback function if the container can’t resolve a type.
  • Child Container Support – OCInject can create child containers that call back to the parent for any unknown types.

The latest stable version can be downloaded from Codeplex and all stableish development releases can be found at BitBucket.

Future Features

One major feature still missing from OCInject is named registrations.  This is because I personally dislike ‘magic strings’; with this in mind, the planned future features of OCInject are factory delegate registration and auto-generated factories from interfaces.  More to come on this in a future post.

Life Style Management

By default all types registered with the OCInject container are transient.  You can register a type as a singleton in two ways.  The first is to use .AsSingleton(); this will cause the object to be created the first time it is requested.  The second is to use .AlwaysReturnObject(obj); this will return the instance you specified.  With either method, if the type implements IDisposable it will be disposed when the container is.  Usage example:

ClassFactory container = new ClassFactory();

//Normal Singleton
container.RegisterType<TestClass>()
         .AsSingleton();

//Preconstructed Singleton
AnotherClass instanceOfAClass = new AnotherClass();
container.RegisterType<AnotherClass>()
         .AlwaysReturnObject(instanceOfAClass);

Func Factories

When resolving constructors, if OCInject discovers a Func<T> where T is a registered type, it will pass in a Func to create the type.  This uses the standard container resolve, so if T is a singleton you will always get the same instance when the factory is called.  Usage example:

class FuncConsumer
{
    public FuncConsumer(Func<TestClass> factory)
    {
    }
}  
ClassFactory container = new ClassFactory();  
container.RegisterType<FuncConsumer>();
container.RegisterType<TestClass>();  
//Successfully created with the ability to create TestClass
FuncConsumer f = container.ManufactureType<FuncConsumer>();

Simplified Registrations

Registrations no longer require the separation of contract and implementation, so just an implementation can be registered.  Usage example:

ClassFactory container = new ClassFactory();  
container.RegisterType<TestClass>();  
var t = container.ManufactureType<TestClass>();

Largest Resolvable Constructor

This is quite a complicated area that is worthy of a blog post itself, but OCInject’s behaviour has changed.  When creating a type the constructors are ordered so the largest, in terms of parameters, is first.  It then looks at each parameter to see if it can be resolved: first by checking ‘resolve time args’ (values passed in when the resolve is requested, normally from generated factories), then by seeing if the type is a registered contract within the container.  It does not check whether the type can actually be created, just that it knows about it; if it’s registered, it assumes it can be created.

The first completely resolvable constructor will be used to construct the type.
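The selection logic can be sketched in a few lines of reflection.  This is an illustration of the algorithm, not OCInject’s actual source, and the class and method names are my own:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// "Largest resolvable constructor": order the constructors greediest-first,
// then take the first one whose parameters are all known to the container
// (registered, or supplied as resolve-time args).
static class ConstructorSelection
{
    public static ConstructorInfo SelectGreediest(Type type, ICollection<Type> knownTypes)
    {
        return type.GetConstructors()
                   .OrderByDescending(ctor => ctor.GetParameters().Length)
                   .FirstOrDefault(ctor => ctor.GetParameters()
                                               .All(p => knownTypes.Contains(p.ParameterType)));
    }
}

class Logger { }

class Service
{
    public Service() { }
    public Service(Logger logger) { }               // picked: Logger is known
    public Service(Logger logger, string name) { }  // skipped: string isn't registered
}
```

With only Logger registered, SelectGreediest(typeof(Service), …) returns the single-parameter constructor rather than the two-parameter one, even though the two-parameter one is greedier.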

Unresolvable Callback

If the container is unable to resolve a type, you can now register a function that is called before the container throws an exception.  To do this you need to register a Func<Type, Object> with the CallToResolve property.  Returning null will cause the container to throw the exception.  Usage example:

ClassFactory container = new ClassFactory();  
container.CallToResolve = (type) => { return new TestClass(); };  
ITestClass manufacturedType = container.ManufactureType<ITestClass>();

Child Container Support

Calling CreateChildContainer() will return a new ClassFactory object with no registrations but a link back to the parent.  If a type is not known to the child, it will ask the parent to fulfil the request.  Any registrations with the child will not affect the parent, and singletons registered with the child will be disposed when it is.  Usage example:

ClassFactory container = new ClassFactory();
ClassFactory child = container.CreateChildContainer();

“Hg Push -b Default” Is Massively Handy

| Comments

A little while ago I wrote a quick start guide to branching in Mercurial and, as is normally the case when you don’t actively follow development, Mercurial 1.5 has been released with a lovely new feature which impacts working with branches.

My biggest annoyance when working with branches is that, by default, all branches are pushed/pulled to the remote repository when, in reality, I often want to push just the branch I’m currently on.  Mercurial has addressed this by adding a branch-only option to push, pull, clone, incoming and outgoing.  To use it you append -b to the command followed by the branch name, for example:

hg push -b default

This will push only the default branch back to where the repository was cloned from (you can still specify a location to push to if required).  This works exactly the same for all the other commands.

A handy shortcut to push the current branch is to use ‘.’ as the branch name:

-b .

This will perform the command only on the branch you’re currently on.  Since I tend to push more than I use most of the other commands, I have set up an alias to map push current branch to pc.  You can do this by adding a section to your hgrc like:

[alias]
pc = push -b .

Branching Without Having It In Your History

Cloning locally has always been an option to create a branch or fork of the code that is still linked to the original code base, i.e. you can push and pull to it, but it doesn’t become part of the other repo’s history unless you push it back.  But I’ve never liked it because it takes a copy of everything and sets the current branch to the default branch.  Trivial, maybe, but not something I found desirable.

The new branch options makes cloning locally a lot more attractive to me.  It means I can just clone a branch, make a quick change or two and either merge it back in or delete the directory and pretend I never had that idea!

Powershell and Mercurial

As part of my continuous improvement I’ve been learning PowerShell.  One way I’ve done this is by replacing all my cmd.exe usage with the PowerShell prompt instead.  This led me to discover an excellent cmdlet (script, or whatever the proper name is, I’m still learning :)) that displays the name and status of the current branch when you’re in a Mercurial repository.

(PowerShell status display)  More information about this can be found here.

Passing Reference Types Using Ref, Take Two

| Comments

In my last post I talked about passing reference types using the ref keyword, but it didn’t make a lot of sense.  So I just want to go over it again, hopefully making a bit more sense this time.

When a method is called in C#, a copy of each parameter is given to the method.  This is fairly obvious with value types: if the method changes the value of an int, for example, the caller does not see the updated value.

However, this is not so clear for reference types.  The called method can update the state of the object it was passed, for example appending extra data to a StringBuilder, and the caller’s object will have these updates.  This can lead to confusion about what is really happening: it looks as if the StringBuilder was passed by reference, but actually a copy of the reference to it was passed.

It may be subtle semantics under normal use, but it becomes key to understanding behaviour when the ref keyword is used.  For value types it means that if we increment an int we were passed, the caller will have the new value too.  For reference types, the reference we are passed is actually a reference to the caller’s reference, meaning that if we assign a new reference to it, the caller will get the new reference.

We can demonstrate this with the following code:

static void Main(string[] args)
{
    StringBuilder sb = new StringBuilder();
    sb.AppendLine("Added by Main");  
    AddToSBPassedAsNormal(sb);
    Console.Write(sb.ToString());
    //Output: Added by Main
    //        AddToSBPassedAsNormal  
    AddToSBPassedByRef(ref sb);
    Console.Write(sb.ToString());
    //Output: AddToSBPassedByRef  
    AddToSBPassedAsNormalNewUsed(sb);
    Console.Write(sb.ToString());
    //Output: AddToSBPassedByRef  
    Console.ReadLine();
}  
private static void AddToSBPassedAsNormal(StringBuilder sb)
{
    sb.AppendLine("AddToSBPassedAsNormal");
}  
private static void AddToSBPassedByRef(ref StringBuilder sb)
{
    sb = new StringBuilder();
    sb.AppendLine("AddToSBPassedByRef");
}  
private static void AddToSBPassedAsNormalNewUsed(StringBuilder sb)
{
    sb = new StringBuilder();
    sb.AppendLine("AddToSBPassedAsNormalNewUsed");
}

From the code above we can see that after the first method call the StringBuilder contains both strings.  But after the method call where it is passed by ref, the previously entered data has been lost.  Finally, using new when the parameter is not passed by reference has no effect on Main()’s reference to the StringBuilder.

Before passing a reference type using the ref keyword you must think carefully about the implications of the called method changing the reference, as it can lead to some esoteric and difficult to track down bugs.

C# Basics: Ref’ing References, Ref’ing Hell

| Comments

For the past couple of weeks I’ve been deep in some legacy code.  The code has all kinds of hidden charms, and while I’m not going to be overly critical, because it was written at a time when a .NET 2.0 application was cutting edge, I uncovered this gem:

public void SomeHighLevelFunction(out String feedback)
{
    StringBuilder mySb = new StringBuilder();  
    _WorkerItem1.DoWorkOne(ref mySb);
    _WorkerItem2.DoWorkTwo(ref mySb);
    _WorkerItem3.DoWorkThree(ref mySb);  
    feedback = mySb.ToString();
}

Ignoring the void with the out String, why pass the StringBuilder by ref?  This code was written by a migrating C++ programmer; if you’ve worked with C++ at all, a little light bulb may have just gone off for you.  It works, even though it might be the very definition of programming by coincidence, so why is it so bad?

Why So Bad?

When calling a method in C#, all parameters are passed by value.  It is a common misconception that reference types are passed by reference, when they are in fact passed by value.  This backs up the position of the C++ programmer, so what is the difference?

C# and its documentation have no concept of pointers (outside of IntPtr), even though they are used extensively under the hood.  When you call a method and pass a reference type, you are actually passing a pointer (that lives on the stack) to a memory location on the heap.  The pointer is copied, so you can still manipulate the memory, but you can’t affect the caller’s reference.  For example:

static void Main(string[] args)
{
    StringBuilder sb = new StringBuilder();
    sb.AppendLine("Added by Main");  
    AddToSBPassedByRef(ref sb);
    AddToSBPassedAsNormal(sb);  
    Console.Write(sb.ToString());  
    Console.ReadLine();
}  
private static void AddToSBPassedByRef(ref StringBuilder sb)
{
    sb = new StringBuilder();
    sb.AppendLine("Added by AddToSBPassedByRef");
}  
private static void AddToSBPassedAsNormal(StringBuilder sb)
{
    sb = new StringBuilder();
    sb.AppendLine("AddToSBPassedAsNormal");
}

Has the output of:

Added by AddToSBPassedByRef

This could be confusing to a C++ developer because in C++ all classes are created on the stack unless you declare a pointer and use new to create them on the heap.  Having said that, a simple rule applies to both C# and C++: if you have to use new, the object gets created on the heap, and everything else is created on the stack.  Anything on the stack, value types or the references to reference types, is copied between method calls.

Pair Programming, Peach or Plum?

| Comments

I’m currently in my third week of pair programming and I wanted to talk about some of my experiences, both positive and negative.  I pushed for a pair programming ‘experiment’ at work, but I was dubious as to its value and usefulness in the cut and thrust world of professional software development, so the justification for it seems a good place to start.

Two Expensive Resources Working Slower, Yeah Right!

Let’s not beat about the bush: pair programming means that two expensive resources, developers, are tied up on a task without necessarily bringing two ‘man days’ to it.  What pair programming aims to do is reduce the overall cost of ownership of code.  It can work as a mentoring technique to develop less experienced programmers, but that may be defeating the object of the type of pair programming that really brings value.

In a traditional software development lifecycle the cost of fixing a bug rises exponentially as you move through the software’s (or feature’s) lifecycle.  I’m sure we’ve all seen a graph similar to the one below at some point in our careers; it shows the cost of fixing a bug versus the stage development is currently in.

(Graph: cost of fixing a bug versus development stage)

The aim of pair programming is to raise the costs slightly at the start, with a view to reducing the sharp curve towards the end.  So we end up with a smoother curve and spend less time in the costly area of the graph.

The Good, the Bad and the Ugly

My experience has been overwhelmingly positive; the things that stick out the most are:

  • More Focused – I’ll admit I have problems concentrating at times: I’ll see a tweet, Google something related to what I’m doing, find an interesting blog, etc.  Working in a pair has really focused me on the task at hand and I think I’m more productive.
  • Less Rabbit Holing – I’ve found that just verbalising my ideas about what I want to do with a design, a refactoring or while investigating a bug helps to work out problems or see things I’ve missed.  Also, the navigator can ask questions about less clear areas.  All of this leads to a better design, bug fix, etc.
  • Differing Styles – Participating in or watching someone else’s thought process has shown me areas where I could improve and given me new approaches to try when blocked.
  • Naming – One thing I struggle with at times is how best to name things to express my intentions; it helps to have a partner to ask “this is what this does, do you think that’s descriptive?”.  All this leads to better and more maintainable code.

The not so good:

  • Breaks – It can be awkward to be tied to someone else’s brew-up times, smoke breaks, lunches, etc.  It can break the flow and make it feel like you’re spending time waiting on others.

  • Thinking Time – This is very subjective and probably should be a pair activity, but sometimes, when confronted with difficult to understand legacy code, a strange bug or just several choices, it’s nice to go through your ritual.  Whether that be listening to your favourite music, rocking the water cooler or chatting to that cute girl in marketing, it all has the same outcome: you feel better and have had thinking time.  I’ve found this particularly hard when your partner is looking, listening, questioning and waiting for you to carry on.

Overall, it’s definitely been a worthwhile experience and one that I will be continuing.

Remember It’s Not For Everyone

We’ve been careful to explain to people that it is not an experiment with a view to introducing it across the team, just something we wanted to do.  It started off with just two of us, but a third has joined, with one or two more interested in joining if the opportunity comes up.  But it’s important you don’t force the issue.  For it to be a success you need people who:

  • Check their ego at the door – There is no room for “this is the way I do it and it’s the right way”.  One of the biggest takeaways for me has been learning from someone else’s style and approach, even when I disagreed.
  • Are interested – This is so important; for the navigator role to work well the person needs to want to do it, not just be sat thinking “God, when can I drive?”.
  • Are strong – I have a strong personality and it can overpower people around me.  As much as I’ve tried to rein it in when working with less strong personalities, it’s still important the other person is strong and willing to defend their opinion.