Friday, 14 December 2012

My Life in Computer Games: Part 2

This is a series of posts which chronicles my life as measured by the computer games I’ve played; you can find the first part here. Let’s continue the journey…

Perfect Dark

The N64 was starting to run out of steam but, like every console at the end of its lifecycle, it had a few last gems to dish out. One of them was Perfect Dark, the spiritual successor to GoldenEye.

It offered the same kind of gameplay that GoldenEye introduced, but in a different setting and with a tighter feel; it kept the mission difficulty of its predecessor while cramming in many more features, letting you play in a higher resolution (640 x 480!) and even including voice samples. The voice acting was terrible, but having guards complain about being shot added a bit more to the atmosphere.

Once again though the multiplayer was where it truly shone, not only adding more weapons and options but also bots, allowing up to 12 players at once – 4 human, 8 computer controlled. This really changed the pace of the matches; GoldenEye had a more measured, strategic pacing, but because Perfect Dark's bots never stood still for longer than a second you always had to be on the run, making things more frantic; it is how I imagine PC multiplayer games like Doom and Quake to be.

Banjo-Tooie

Image courtesy of Wallpaper Pimper

In 1998 Rare released their take on the 3D platformer, Banjo-Kazooie. I loved that game but I have far fonder memories of its sequel, Banjo-Tooie (despite the ridiculous name).

Whereas the first game had big worlds to play in, this one had massive worlds; when playing through the first level I thought it would be only a bit bigger, but when I got to the highest point and looked down I could see everything laid out before me! That must have been quite an achievement for the N64. Not only were the levels huge but they were interconnected; unlike other games where each level/world was its own distinct area, this game connected each level at strategic points to give the illusion of a far bigger world. In fact some of the game's puzzles relied on moving between levels; Chuffy the Train is one example.

Banjo-Kazooie also tried to have its own "Lock-on Technology" moment by introducing "Stop 'N' Swop", the idea being that once you finished the first game you could unlock secrets that would carry over to the sequel, though it never really worked out in the end. Banjo-Tooie did make reference to it, but it was entirely self-contained within its own game. Shame.

Metal Gear Solid 2

Now this is a bit different! I never actually owned this since I never had a PS2, but my friend did and we spent ages playing through it. I had seen trailers of it and it looked amazing! Everyone was hyping this game up. So when we started it, instead of playing as the legendary Solid Snake, we had… a girly, floppy-haired man with girlfriend issues. Eh?

It turns out it was all part of a ruse. All the E3 trailers showed Solid Snake back on form, but the focus was not actually on him for most of the game: if, when starting a new game, you said you had never played the original (as we hadn't), you skipped the well-known tanker section and went straight to the meat of the game, playing as new spy Raiden. I can kind of understand it now; by changing the perspective you see the most important character in a new light. Still, it left me and my friend a bit perplexed initially, especially when I told him there was a whole other part of the game that he hadn't experienced.

Nonetheless it was a really great game. The graphics were fantastic, the stealth gameplay excellent, the boss battles massively over the top and the cutscenes well presented, even if on the long side – I'm sure half the game was devoted to just watching them. My highlight though was towards the end, when the game tried to play tricks on you, suggesting you "turn the console off now" or showing the "Fission Mailed" screen; for a game which played it straight most of the time, it left us a bit shocked as to what to do.

Half-Life

“They’re waiting for you Gordon, in the test chamber…”

Image courtesy of Giant Bomb

I was never really into playing PC games as it seemed each game I bought required a brand new set of hardware, which gets expensive quickly. But a few have caught my eye over the years and this was one of them.

Half-Life tells the story of a scientist who has to escape the confines of a research facility after an accident happens and aliens start arriving. So far, so typical science fiction. What made this PC shooter stand out from the others, though, was its narrative and the fact that you were part of it; no cutscenes, no ever seeing what Gordon Freeman looked like – everything was played from your perspective.

There were also no discrete levels; instead the entire facility was one continuous level. The opening sequence demonstrated this brilliantly: as you headed to work on the tram system you would see everyone else going about their jobs, pass through security sections, and watch things moving about. Despite the aging graphics it felt real, like a fully functioning place.

The enemy AI was fantastic too. Initially you met various aliens which felt reminiscent of Doom, but eventually you met the Marines, who were really smart; sometimes it took a lot of clever thinking to outsmart them.

Star Wars: Jedi Knight II

There have been a bazillion games based on Star Wars, but how many of them can you remember that were even slightly good? This one, in my opinion, was one of the best, because finally I could be a Jedi; not doing goody-goody two-shoes tasks like diplomacy, as the prequel films would have you believe, but doing what all kick-ass Jedis do: hacking and slashing Stormtroopers with a lightsaber.

Overall I just liked this game; the force powers were focused and the lightsaber duels were fun. The initial few levels forced you to play with weapons, which was a shame, but it was the lightsaber that stole the show.

Halo: Combat Evolved

Image courtesy of Pocket-Lint

A new kid came on the block; Microsoft thought they could do this video games thing just as well as Sony and Nintendo, so they invented the Xbox, a huge black console which would leave a legacy. But every console needs a "killer app" to sell units, especially a brand new one with no track record, and that is what Halo became.

Lots of people played it for its multiplayer, but I played it for its engaging story, the clever AI and, most importantly, its co-operative play. A friend and I played through the co-operative game religiously and for once it made a difference; it wasn't just about who could shoot more enemies, it shaped our strategy and we watched each other's backs. It also has one of the best climactic endings for a game I've seen.

Half-Life 2

Image courtesy of Voodoo Extreme VE3D

I played the Xbox version and thought it was a great sequel to the original. The scope expanded from the Black Mesa facility to City 17, an unnamed European city where the remaining humans were being rounded up by the invading alien forces. The opening of this game worked in a similar vein to the original's, but this time focused on the humans milling around, listless and hopeless.

The real star of the show though was the physics and the infamous "Gravity Gun". This was probably the best weapon in the game yet it didn't fire a single bullet: instead it let you pick up nearly any object and fling it with force at anything you wanted. The Ravenholm section was the best example of this: flinging buzzsaws at the zombies and slicing them in half never got boring.

I also really liked the technology powering the game; having the characters watch you and lip synch perfectly was quite something I thought.

Next Time

The last part focuses on recent years, particularly the Wii. Stay tuned!

Friday, 30 November 2012

My Life in Computer Games: Part 1

I'm coming to a realisation that has slowly been dawning on me for a while now: I'm going to give up playing computer games. For someone who has been playing them since childhood that's quite a big thing to say, but there are several reasons for it.

First and foremost, I do not have the time anymore. I've got a full-time job and a family to support, so free time for me is extremely precious. I'm currently playing The Legend of Zelda: Skyward Sword, but I've been doing that since Christmas 2011! And I still haven't finished! So far I think I've clocked up about 40 hours of gameplay, but that is spread out across a year, playing maybe a couple of hours a week at best. Finding the time for anything more involved than Angry Birds is simply difficult these days.

But it's not just a matter of finding the time; I actually don't feel that bothered about computer games anymore. In the limited free time I have left I would rather be doing something constructive (such as writing this blog) and learning more about programming. I'm still interested in how computer games are made – I found the code reviews that Fabien Sanglard did on the Quake and Doom game engines really interesting – and I still appreciate them – I think Halo 4 looks amazing. But playing them? Meh.

So I thought I would make a list of all the memorable games I’ve played during my life so far and reminisce, which is going to span a few posts.

The ZX Spectrum

When I first asked my parents for a computer to play computer games I thought I would be getting a Nintendo or a Sega console. Instead what I got was a ZX Spectrum +2, so not quite what I was expecting.

In the end it did shape my entire career by introducing me to programming, but as a game machine all I remember was having to load tapes which took minutes – and also provided a lovely whining noise and hallucinogenic loading screen – only for me to play it for 30 seconds and give up because I didn’t find it entertaining. Maybe the next one will be better… (wait another 5 minutes to load the next tape).

A ZX Spectrum loading a program. Wow, my head hurts…

So I gave up on the Spectrum and got myself a Sega Master System, which then led to a Sega Mega Drive, which then led to…

Sonic the Hedgehog

Back in the 16-bit days you were either with Nintendo or Sega, Mario or Sonic. I chose Sonic and have many happy memories of those games, because they were fast!

Sonic the Hedgehog 2 was my personal favourite, the video above showing one of my favourite levels due to the sheer speed you can crank up to – so fast that the screen sometimes can’t even keep up with you!

But the real gem of the series was Sonic 3 and Sonic & Knuckles. Both games were great on their own, but the "lock on" technology that Sonic & Knuckles brought combined them into an amazing experience – extending the Sonic 3 game, or, by joining Sonic 2 to it, turning an old game into a brand new experience.

Recently I actually wondered how they even did the "lock on" bit; how do you turn one game into three? And how do you take an old game which wasn't even designed for this kind of thing and make it into what is effectively a brand new game? It turns out it was a clever ROM trick: joining ROM chips together to form a new one; this long forum post explains it in far more detail.

The 32-bit Years

After the Mega Drive I got a Sega Saturn – I'm not sure why in retrospect, rather than a PlayStation, which was the latest hotness at the time. There were some good games such as Sega Rally and NiGHTS into Dreams, but this was the advent of 3D graphics and things were just starting out. One game caught my attention though which made me rethink everything…

GoldenEye
Need I say more? Oh, alright then

I remember going round to my friend's house and they were playing GoldenEye. I had a go – the Nintendo 64 controller looked a bit weird to hold – but I started playing and quickly realised that I was completely hooked on the multiplayer. Consoles before the N64 supported two players as standard, but this one could support four, meaning that multiplayer shooters finally made sense outside of a PC.

So I got an N64 with GoldenEye as my first game for it. The single player missions were great with lots of challenges added as the difficulty ramped up, but my single defining memory of this game was spending nearly all my free time between A-Level lectures playing deathmatch games against my friends. And beating them. Over and over again.

Super Mario 64

Now that I had an N64 I wondered what other games to play on it, so of course I got the N64's killer app: Super Mario 64.

Before then I hadn't actually played a Mario game, but this one was great. I remember just wandering around the castle hub-world doing all sorts of acrobatics simply because I could, and the analog stick finally made moving around in 3D make sense. Although I have to admit the camera controls did suck.

Throw a dinosaur four times my size into a bomb? No problem!

I also came to realise that games that Nintendo made really were top quality; they were fun, inventive and imaginative.

The Legend of Zelda: Ocarina of Time

This game came out during Christmas 1998 and was snapped up by just about everyone at the time. I was very lucky to get this on Christmas Day and spent the next two weeks solid becoming immersed in it.

Just like with Mario, I had never played a Zelda game before, so I never experienced the sense of adventure previous games conjured up, and this did feel like a real adventure. It introduced concepts like "locking on" to your target during combat so you could always see them (something every game afterwards copied) and split the world across two time periods, meaning you played as both Young Link and Adult Link. And you got to ride a horse!

Looking back I simply remember the variety of the gameplay, the sense of scope (looking into the distance at the volcano, or riding across Hyrule Field), and the final climactic battle with Ganondorf.

The Legend of Zelda: Majora’s Mask

The sequel to Ocarina of Time, this was like a Zelda game and also not like one at the same time. With the benefit of hindsight I think this had a lot more emotional depth than Ocarina.

The main selling point of this game was that you had to save the world from destruction in just three days (game-time, not real), but of course you couldn't do everything in that kind of timeframe. This meant a "Groundhog Day" concept was used: you could relive the same three days over and over again to make progress, watching the lives of each inhabitant of the world play out repeatedly and making notes of the key points in time when they would do certain actions. It was a bit harder than Ocarina and there was an added sense of urgency (what with an evil-looking, ever-looming moon constantly visible in the sky), but it was still well worth playing.

That doesn’t look promising.

The emotional depth I mentioned, though, was something I wasn't expecting; every character had a backstory and it was your job to help them. One little girl's father had turned into a monster, yet she tried to shield him from the world to protect him. A baby had lost his father and brought depression to everyone around him. Probably the most poignant was having to re-unite a wife and husband – the longest of all the side quests. You eventually did, but only just in time, by which point the moon was about to crash into the world – you brought them back together long enough for them to properly say goodbye to each other. This adventure was not about stopping a singular enemy; rather it was about healing the wounds of the people of the world.

Next Time

There’s still a lot more to go, so stay tuned for the next part.

Wednesday, 14 November 2012

Forcibly Uninstall Apps from Windows 8

I took the plunge a couple of weeks ago and upgraded to Windows 8. It takes some getting used to, but overall I'm mostly happy with it. Except when it did something very unusual with the Metro apps (I refuse to use another name, since everyone by now knows what "Metro" means).

For those who don't know, when updates are available for Metro apps they appear in the Windows Store, with a little number on its live tile. One day I found that there were updates available for all the built-in apps such as Mail, Calendar, Bing etc. So I dutifully started the update process, found it was taking a while as there were quite a few, and just left it to it.

To be fair I might have messed up my system myself but while this was happening I thought “I don’t need Travel, Sports or anything like that. I’ll just keep the ones I’m interested in”. So I uninstalled the apps I didn’t care about whilst they were still updating.

In retrospect I should have known better – I’m effectively classed as a power user! As a result of my blunder I ended up with nearly all the apps that were updated being wiped from the system, including some of the ones that did matter to me. Here’s the interesting part though: when I went back to the Windows Store to try and install them again I couldn’t. This is what I saw:

So the Store thinks it is already installed. Why, then, when I search for it, does this happen?

Something in Windows clearly thinks I have the app installed. In the past I would have known to check certain folders like Program Files or delve into the registry to see if some sort of metadata was lying around, but Windows 8 changes things up a bit: these Metro apps are completely self-contained, sitting in the WindowsApps folder, which is locked down so tightly that I can't even read it by default. So how can I remove these hidden settings to get Windows to play nice again?

Fortunately after some Googling I found the answer on this forum which I will explain in detail below. In this example I’m going to re-install the Bing app despite the Store telling me that I already have it.

First you need to open up an elevated PowerShell console. Simply:

  1. Press the Win key to get to the Start Screen.
  2. Start typing "PowerShell".
  3. Right-click on "Windows PowerShell" and click "Run as administrator" in the menu that appears at the bottom of the screen.

Now you can run this cmdlet to see what Metro apps Windows considers to be installed:

Get-AppxPackage -AllUsers

This should give you a list like in the image below:

Now that you’ve got the details of the app you can run another cmdlet to remove it. In my case I did this:

Remove-AppxPackage Microsoft.Bing_1.5.1.251_x64__8wekyb3d8bbwe

You’ll notice that you have to use the PackageFullName value that is provided in order for the cmdlet to work.

Once that was done I went back to the Windows Store and checked that it worked:

In my case I simply repeated these steps until I had cleared up my mess and managed to get everything I wanted back.

I think it's quite good that there is actually a way to uninstall these apps in an automated fashion. Regardless, this trick got me out of a hole, so I'm sure someone else might benefit from re-learning Windows tricks like I'm doing.

Thursday, 8 November 2012

The Urge to Rewrite Code

I recently read a fantastic article on Ars Technica about the inner workings of the new WinRT technology powering Windows 8. It’s a lengthy article but it explains how we got all the way from Win16 to Win32 to OLE to COM to .NET and eventually to WinRT and is well worth a read if you’re interested in Windows development and its history.

What got me thinking, though, is that it gave me a snapshot into Microsoft's process of keeping Windows and its technology up-to-date while also supporting a vast legacy of applications, some of which have been running for the past 20 – 30 years. What you realise is that to support that legacy and keep pushing forwards, Microsoft basically built everything on top of what came before. If you were to start from the latest WinRT APIs and strip away all the layers of strata you would eventually hit the ancient Win32 API and kernel, the very same system that has been powering Windows since Windows NT. The same can be said for .NET and COM; it doesn't matter how new or fancy the latest technology trend is, eventually they all become nicer wrappers over what came previously.

This is interesting to me because I'll bet every single developer out there has had one thought cross their mind at some point in their career: "I need to rewrite this". Maybe you have a codebase written in another era, or you've inherited code that looks like a monkey tap-danced on a keyboard; sometimes we all fall prey to thinking that we need to invest time in rewriting huge chunks of code (or entire systems) because, obviously, we can do a better job and the end result will be "better".

This is dangerous!

I can speak from experience, and cite any number of references, saying that taking on a rewrite of a codebase, especially an enterprise/commercial one, is just a bad idea. What you are effectively suggesting is to take a (mostly) working system, spend 6 – 12 months (maybe more) ditching it and starting all over again, and end up with the exact same system you started with plus countless additional bugs introduced through human error and/or lack of domain knowledge. On top of that, all the time you spent rewriting code is time you couldn't spend on current development work, so your competitors have now charged ahead of you with brand new features that you will never be able to catch up with.

All of this so that your code looks better, a matter that no-one else but you cares about.

Now I am being a bit extreme here. Of course there are times when you have to rewrite something in order to progress. Maybe your code is so stuck in the dark ages that adding new features becomes increasingly time consuming or complex, maybe impossible. Yet I've learned over the years that you simply cannot ditch what you already have; your customers don't give two hoots how you managed to kludge together their feature. The fact remains that it was done and it works, so breaking it now is not an option.

So what can be done to refactor code effectively? Below are some ideas that I have thought about, some of which I even try to implement myself.

Do Nothing

By far the simplest strategy as this requires no work at all! Simply learn to live with your codebase, quality be damned.  If you can overcome your initial feelings of revulsion at the spaghetti code you see daily and just accept it for what it is you might overcome some hurdles.

Of course doing nothing also means making no improvements so it’s quite a trade-off, but as the old saying goes “if it ain’t broke, don’t fix it”.

The Big Bang Strategy


The polar opposite of doing nothing is doing everything in one go, but as I've already said this is very extreme and hardly ever needed, as there are better ways of improving your codebase without greatly affecting anything else.

The Microsoft/Onion Strategy

What I've seen Microsoft tend to do is build layers on top of their existing technologies, so that each new layer has a better API than the one below it and handles the fiddly, lower-level details so you don't have to.

For example, consider when .NET first came into existence and introduced Windows Forms. This was meant to replicate the drag-and-drop style of development that Visual Basic programmers had long been used to. But do you think that all the framework classes designed to handle windows and controls were written from scratch for a brand new, untested technology? No – Windows Forms was simply an easier-to-use wrapper over interop'ed Win32 code, because that code already worked; why re-invent the wheel?

Of course once you’ve introduced these layers and made sure they are working effectively you could start to clean up the lower layers or possibly even remove and replace them so that you don’t need so much API coverage; that is assuming of course you can remove all the dependencies on low level code.
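As a toy illustration of this layering idea – with entirely made-up function names, not any real Windows API – here is what a wrapper layer can look like in TypeScript: the old code stays untouched underneath, while the new layer presents a friendlier API and translates between the two conventions.

```typescript
// Hypothetical legacy function: positional arguments and numeric error
// codes, the kind of API you dare not change because everything uses it.
function legacyCopyFile(src: string, dest: string, overwrite: number): number {
  if (!src || !dest) return -1;             // error code, not an exception
  if (!overwrite && dest === src) return -2;
  return 0;                                 // 0 means success
}

// The "onion" layer: a nicer API that delegates to the proven code
// underneath instead of reimplementing it.
function copyFile(src: string, dest: string, options: { overwrite?: boolean } = {}): void {
  const code = legacyCopyFile(src, dest, options.overwrite ? 1 : 0);
  if (code !== 0) {
    throw new Error(`copy failed with legacy error code ${code}`);
  }
}
```

Callers of `copyFile` get named options and exceptions, while the battle-tested logic below carries on doing the actual work.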

The Side-by-Side Strategy

This is a refactoring strategy I tend to use myself. Let’s say you have a feature that, for whatever reason, you are going to rewrite. What I do is actually never touch the old code and instead create a separate layer of classes alongside the existing code to replicate the same functionality but written differently; usually I separate these classes with appropriate namespaces.

Now I can work on the new code whilst the old code can still be deployed if necessary, without affecting my other team members' builds. Eventually I will have fleshed out the rewritten code enough for calling sites to start using it, which then phases out the old. Once all references to the old code have been removed, you can safely delete it from your codebase. This might take quite a while to achieve fully, but it is certainly a lot safer than starting from a blank canvas with nothing to show for a long time.

I also use this strategy with the ObsoleteAttribute to make it clear that code is old and should no longer be used; it also helps find all the references to the old code by giving me compiler warnings that I can work my way through.
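As a sketch of this strategy – with hypothetical names, and using TypeScript's `@deprecated` JSDoc tag as a rough stand-in for the ObsoleteAttribute idea – the old and new implementations can live side by side in their own namespaces while call sites migrate one at a time:

```typescript
namespace Legacy {
  /** @deprecated Use Pricing.totalPrice instead. */
  export function calcTotal(prices: number[]): number {
    // Old implementation, kept deployable while the rewrite matures.
    let total = 0;
    for (let i = 0; i < prices.length; i++) total = total + prices[i];
    return total;
  }
}

namespace Pricing {
  // New implementation, written alongside the old code rather than
  // replacing it; editors flag any remaining calls to Legacy.calcTotal.
  export function totalPrice(prices: number[]): number {
    return prices.reduce((sum, p) => sum + p, 0);
  }
}
```

Both versions produce identical results, which is exactly what lets you switch callers over gradually and delete the `Legacy` namespace once nothing references it.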

The Inheritance Strategy

As an alternative to having old and new code side-by-side, you could also implement it top-to-bottom within an inheritance hierarchy. The new code would be contained in a base class while the old code would derive from it and keep its existing API. This means that legacy code could, in theory, be passed into functions which require the new class and still work.
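A minimal sketch of the idea in TypeScript (invented class names, not from any real codebase): the new API lives in the base class, and the legacy class derives from it, forwarding its historical method to the new one.

```typescript
// New code: the base class with the cleaned-up API.
class ReportGenerator {
  generate(title: string, rows: string[]): string {
    return [title, ...rows].join("\n");
  }
}

// Old code: derives from the new class but keeps its historical API,
// so existing call sites keep compiling unchanged.
class LegacyReportGenerator extends ReportGenerator {
  makeReport(data: { title: string; rows: string[] }): string {
    return this.generate(data.title, data.rows);
  }
}

// Anything typed against the new base class also accepts the legacy one.
function render(gen: ReportGenerator): string {
  return gen.generate("Summary", ["row 1"]);
}
```

A `LegacyReportGenerator` instance satisfies both old callers (via `makeReport`) and new ones (via `render`), which is the whole point of the strategy.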


There are many alternatives to refactoring code in one large chunk – I'm sure there are also many other strategies thought up by people far cleverer and more experienced than me. Essentially I have learned from my career that the "big bang" approach never works out well and a more long-term, slower strategy usually gives the best results.

Saturday, 13 October 2012

TypeScript: JavaScript’s Equivalent to C++

I've never been partial to JavaScript. I've no doubt that in the right hands you can do some truly amazing things with it.

But I don't use any of that for my day job; I use just enough JavaScript to get my website to work, which means I effectively treat it like a "glue" language and therefore don't have the same respect for it as many other web developers do. I have considered putting aside any misconceptions I have and knuckling down to learn how to use it effectively and write beautiful, maintainable code, but at the end of the day it always boils down to the following problems for me:

  1. Thanks to its dynamic nature you can do anything you want to your objects. This is both a blessing and a curse: a blessing in that it gives you an enormous amount of power if you want it; a curse in that if you make a stupid mistake it will merrily fail and do nothing about it – no critical errors or exceptions, it just carries straight on as if nothing happened.
  2. Although not limited to JavaScript, having a dynamic language means I cannot inspect the code with IntelliSense/autocompletion, or at least nowhere near as well as a static language. I will freely admit that Visual Studio’s IntelliSense has spoiled me rotten and I simply cannot live without it now; the ability to see what functions/properties exist in a type as I am writing my code is a godsend. With JavaScript though I am forced to remember APIs or look up documentation for the DOM to remember how to do even the simplest task.
  3. JavaScript was designed to be just a scripting language, yet it has been proven that with a bit of trickery you can emulate other language features, such as classes and inheritance. But for someone who does not write JavaScript every day, trying to understand these concepts is perplexing. For example, did you know that there are three different ways to define a JavaScript class? How am I to know which is the correct method to use? And when looking at other people's code, does this mean I have to do some mental juggling whenever someone uses a different method than me?
  4. Because it is a scripting language and not really designed for application-level code, there are a number of quirks to get your head around, such as the lack of a modular structure and having both a null and an undefined concept. For an experienced JavaScript developer this must be second nature, but for me it can mean hours of wasted time trying to understand why my code isn't working when it looks like it should.
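The first point is easy to demonstrate. In this contrived example of my own, a typo'd property name produces no error at all – the lookup quietly yields undefined and execution carries straight on:

```typescript
// An object typed as `any` behaves like plain JavaScript: no checking at all.
const user: any = { firstName: "Ada", lastName: "Lovelace" };

// Typo: "fristName" instead of "firstName". JavaScript doesn't complain;
// the property lookup just evaluates to undefined.
const greeting = "Hello " + user.fristName;

// greeting is now "Hello undefined" – the mistake sails straight through.
```

Give `user` a proper type such as `{ firstName: string; lastName: string }` instead of `any` and the compiler flags the typo at build time – which is precisely the gap TypeScript (discussed below) sets out to close.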

But maybe my biggest problem with JavaScript is that even if I or anyone else doesn't like it, tough! JavaScript has a monopoly on client-side web programming and there is simply no alternative. To paraphrase the famous Henry Ford quote: you can use any client-side scripting language you want, as long as it's JavaScript.

Enter TypeScript

Now my intention for this post is not for it to be a rant against JavaScript; many developers love it, my language of choice is C#, to each their own. Rather it is about the recent news and release of TypeScript from Microsoft. If you haven’t heard, TypeScript is a language designed to add things like type annotations, proper classes and modularisation into JavaScript. I would recommend viewing two videos from Channel 9 to get a proper sense of what it is all about:

  1. Anders Hejlsberg: Introducing TypeScript
  2. Anders Hejlsberg, Steve Lucco and Luke Hoban: Inside TypeScript

What makes this interesting to me is that it is not a language like CoffeeScript, which has its own syntax to learn that then compiles into JavaScript, thereby putting you a step removed from the result. Rather, TypeScript starts as JavaScript code which works completely as you would expect, and then lets you add things like type annotations where you think applicable to clarify your intentions; it still compiles to pure JavaScript. As an analogy, if JavaScript is like C, then TypeScript is like C++: backwards compatible with the existing language (plus all the millions of libraries and frameworks written in it, including probably the most important one), yet able to express some core language concepts much more easily to make you more productive.

And for all those people who would complain that Microsoft are trying to take over JavaScript or subvert it for their own cause as they may have done in the past, be aware that nearly everything they are adding to the language maps very closely to the draft ECMAScript 6 standard, which defines future features of JavaScript; TypeScript simply brings these features to you now and compiles them all away into JavaScript code that you can run today in any browser, anywhere.

So Why Bother?

So why do we need TypeScript, or CoffeeScript or Dart for that matter? Why can't we just learn to love JavaScript? I'm not sure exactly; I cannot put my finger on it, but JavaScript tends to just frustrate me somehow. The other day I had to make a change to a DOM event handler and spent ages trying to understand why the code I wrote – which looked fine to me – just did not work. All I could do was run through it over and over again; no tools could help me spot that something I had missed was clearly wrong.

And that is actually what TypeScript is for. It isn’t necessarily a language designed to make JavaScript better, it is actually to provide better tooling for JavaScript. Once you have a well defined type system in place, you can build tools which tell you all sorts of things about your code: trying to pass the wrong types of values into a function being an obvious example.

Not only that, but I also see this as a means to ease developers new to JavaScript into some of its complexities. Remember how I said there were three ways to define a class? With TypeScript there is just one, which then compiles into a JavaScript equivalent. With just one means of accomplishing a task, a newcomer can learn faster and, if they want to, delve deeper into what the compiler produces to learn more complex JavaScript.
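For example (a trivial class of my own, not from the TypeScript samples), the single TypeScript form looks like this, with the comment showing roughly the prototype-based JavaScript the compiler produces from it:

```typescript
class Greeter {
  constructor(private name: string) {}

  greet(): string {
    return "Hello, " + this.name;
  }
}

// The compiler emits (roughly) the classic prototype pattern you would
// otherwise have to know how to write by hand:
//
//   var Greeter = (function () {
//     function Greeter(name) { this.name = name; }
//     Greeter.prototype.greet = function () { return "Hello, " + this.name; };
//     return Greeter;
//   })();
```

A newcomer writes only the `class` form; inspecting the compiled output is then an optional lesson in how JavaScript classes actually work under the hood.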

As TypeScript is so new I cannot say yet whether I would actually use it or not; I would need a justified work scenario to consider experimenting with it. But its initial premise has definitely intrigued me, because although something has to be written in JavaScript there is nothing to say that a tool can’t help you with the job in hand.

Monday, 24 September 2012

Becoming a Father: One Year On – Part 4

This is the fourth part of my retrospective on my first year as a new father – though I’m starting to lag behind a bit now, as my son is nearly a year and a half old!

To view all the parts, please click on one of the links below:

  1. The Beginning
  2. The First Two Weeks
  3. Getting the Hang of Things

This post focuses on…

The Accident

Looking back now, the first six months of my son’s life were actually quite uneventful. I would hear stories of his little friends picking up bugs and colds but he seemed as healthy as you could hope, which is something to be thankful for. He never had so much as a sniffle. When he was around 7 months old though he would be paying a visit to the hospital.

One morning my wife was carrying him down the stairs when she lost her footing and slipped, sliding on her back down several of them. It left her a bit bruised and for an adult that would kind of be the end of it – just shrug it off, since it wasn’t that serious – but my son was crying uncontrollably. We couldn’t figure out why as we checked him all over and couldn’t see any marks or bruises anywhere, but he simply wasn’t himself. Our initial thought was that the incident was too much of a shock for him, but after about an hour our instincts told us to get him checked out. As this was a Sunday calling the doctor was out of the question, so a trip to A&E it was.

After waiting for hours that afternoon we had several doctors check him out and the overall conclusion was that he was still just in shock. We did consider a broken bone, but pressing on any part of him seemed to elicit crying so no-one could tell. In the end we took him home and gave him Calpol to help him sleep through the night.

The next morning while I was at work my wife texted me to say she was taking him to our GP to get him seen again as he still wasn’t right, who then referred him back to the hospital to get an X-ray on his right leg. Around 4pm I got a call saying that he had a hairline fracture on his right femur and would need to be put in traction, which meant of course staying in the hospital. And that was the start of the week-long “holiday” on the children’s ward.

I left work early to run home and pick up a few overnight things for my wife and son and went straight to the hospital to see them. He was on baby morphine to help him with the pain and still was not himself. My wife was now constantly blaming herself for a) inflicting this on him, even though it was an accident that she could not have prevented, and b) not spotting his broken bone sooner, despite not having X-ray vision herself.

I watched him as the nurses put him in traction, basically by lying him in bed and wrapping bandages and splints around both legs then tying them to pulleys hanging above his bed; the aim was to keep the leg straight so that it would heal faster – there is no magic cure for broken bones, you simply have to let your body heal itself. Fortunately it was only a hairline fracture and, given his age and constant body development, he would heal a lot faster than an adult (he was well on the mend in a week, whereas it might take me months to heal the same wound).

I stayed with them for a few hours before leaving them for the night; my wife would be staying with him around the clock whilst I would still have to go to work each day. That night was actually the first time ever that I was separated from my son; as I walked past his bedroom at home that night I looked into it and it was empty for the first time, which made me feel more alone than I expected.

The Sleepover

So far it is all sounding doom and gloom, isn’t it? Well let me reassure you by showing you what my son looked like for the rest of the week.

As you can see after his first night in hospital he was pretty happy with life. And wouldn’t you be when you have toys surrounding you in your bed and CBeebies on tap? If anything it was harder on my wife, which I will get to.

As I still had to do my day job I was responsible for ferrying items to and from home for them and visiting them first thing in the morning before work and straight after work before going home. I would then get the luxury of sleeping at home, ready to start the routine all over again.

My wife on the other hand stayed with our son 24/7. He got a room on the ward meaning my wife could sleep on a sofa-bed with him. For five days in a row she lived on the ward with him. By the fourth day she was almost at breaking point and when the weekend finally came so I could take over she vomited on the way home through sheer exhaustion. When I did my weekend shift straight after work on the Friday night I could finally understand what it was like; the ward was never truly quiet, always having staff roaming around and coming in to do observations on my son, day and night. There was even another child on the ward who sounded a lot worse than our son; I would periodically hear uncontrollable screaming from down the corridor, so I don’t know how that child’s parents managed to survive.

But we did it for him because as parents that is simply our job; to put him before ourselves. And he seemed to be having a whale of a time, getting visits from his grandparents regularly and only getting bored now and again. Let’s face it, he could have been a lot worse.

Coming Home

After just over a week (and my wife having a weekend of sleeping at home while I took over) he finally got discharged. He was getting sores from the bandages on his legs so had to come out of traction anyway, but his leg had almost healed completely; we just needed to be careful how we positioned it and keep him on his back. My wife took him for check-up X-rays and a month later he was given the all clear.

In fact now we forget that it ever happened. He is so active and runs around all over the place that it is hard to imagine that his leg was ever broken; people even now ask us how his leg is and we have to think for a second as to what they mean.

Plus, as the doctor at the hospital pointed out to me, it is safe to assume that this will be the first of many accidents he may experience in his lifetime as kids will always get into scrapes in their continual quest to have fun.

Next time: One Year Old…

Friday, 14 September 2012

Testing Framework Review: xUnit.net

In a previous post I reviewed NUnit. For my last post in this series I will focus on xUnit.net, a newer open source framework that is gaining some traction. From the website on CodePlex:

xUnit.net is a unit testing tool for the .NET Framework. Written by the original inventor of NUnit, xUnit.net is the latest technology for unit testing C#, F#, VB.NET and other .NET languages. Works with ReSharper, CodeRush, and TestDriven.NET.

xUnit.net is a developer testing framework, built to support Test Driven Development, with a design goal of extreme simplicity and alignment with framework features. It is compatible with .NET Framework 2.0 and later, and offers several runners: console, GUI, MSBuild, and Visual Studio integration via TestDriven.NET, the CodeRush Test Runner and ReSharper. It also offers test project integration for ASP.NET MVC. xUnit.net is even used internally by some high profile Microsoft projects.

Integration

xUnit.net is a separate project, meaning that direct Visual Studio integration support is not provided. However Visual Studio 2012 will allow frameworks other than MSTest to be used as the primary unit testing framework – this includes TFS builds too.

In the meantime, the following steps are required:

Download xUnit.net from NuGet

NuGet provides three packages for xUnit.net.

Adding these packages to a Visual Studio project is very simple as NuGet will automatically download the latest versions and insert the correct project references required.

Project Items and Snippets

Unlike MSTest, which provides project items and snippets with the IDE, xUnit.net does not provide any by default. However these items are not difficult to create yourself if required, and the CodePlex project page even explains how to create snippets for xUnit.net.


Although some initial setup is required, one possible benefit is that xUnit.net is a standalone framework – it can be run anywhere without requiring installation, simply by copying the correct files.

Team Build

TFS 2012 will be able to use the same Unit Test plugin model that Visual Studio 2012 uses, meaning that in the future it will be a lot easier to integrate xUnit.net into the Team Build process.

Until then though it is possible to use xUnit.net within Team Build, but only via a custom build activity and by translating the XML output into MSTest results. This webpage explains how it is possible to do it, though the process looks quite longwinded to me.

Writing Tests

Tests are written like this in xUnit.net:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Xunit;
using Xunit.Extensions;

namespace SampleCode.xUnit
{
    // Classes do not require attributes; xUnit.net does not care
    public class CalculatorTests
    {
        // A "fact" is a test without any parameters
        [Fact]
        public void Add_AddOneAndTwo_ReturnsThree()
        {
            var result = Calculator.Add(1, 2);

            // Many asserts are provided by default; the API style is simple and concise
            Assert.Equal(3, result);
        }
    }
}

There is a much wider variety of assertions provided by xUnit.net by default compared to MSTest. A full list can be found here.

Data Driven Tests

Data driven tests in xUnit.net are known as theories. They are test methods that have parameters and can accept input from a number of sources. A theory looks like this:

[Theory]
[InlineData(1, 2, 3)]
[InlineData(3, 4, 7)]
[InlineData(30, 10, 40)]
public void Add_AddDataValues_ReturnsExpectedResult(int first, int second, int expected)
{
    var actualResult = Calculator.Add(first, second);

    Assert.Equal(expected, actualResult);
}

Out of the box xUnit.net can accept input from the following sources:

  • Inline data
  • Property data
  • Excel spreadsheet
  • OleDb connection
  • SQL Server database

Running Tests

Until Visual Studio 2012 comes out, xUnit.net tests cannot be run directly via the IDE, but there are a number of other options available.

The console runner is the most basic test runner available and works from the command line.

The GUI runner is a simple standalone application with its own user interface; it is not as full featured as the NUnit GUI runner but is capable enough. Instead of a tree view like the NUnit GUI runner, this test runner presents a flat list of tests, but they can be filtered down by search terms, assembly and/or trait values.

One aspect that is different from the NUnit GUI runner is that, although this runner will detect and reload test assemblies when they are rebuilt, it will not automatically run the selected tests again, unlike NUnit.


MSBuild Runner

Unlike MSTest and NUnit, xUnit.net provides its own custom MSBuild task which allows direct integration into the build process. A project file can then use it similar to this:

<UsingTask
    AssemblyFile="..\packages\xunit.1.9.1\lib\net20\xunit.runner.msbuild.dll"
    TaskName="Xunit.Runner.MSBuild.xunit" />
<Target Name="AfterBuild">
    <xunit Assembly="$(TargetPath)" />
</Target>

Build output then looks similar to the following:

------ Build started: Project: SampleCode, Configuration: Debug Any CPU ------
SampleCode -> C:\TalentQ\Experiments\UnitTestAnalysis\SampleCode\bin\Debug\SampleCode.dll
------ Build started: Project: SampleCode.xUnit, Configuration: Debug Any CPU ------
SampleCode.xUnit -> C:\TalentQ\Experiments\UnitTestAnalysis\SampleCode.xUnit\bin\Debug\SampleCode.xUnit.dll
xUnit.net MSBuild runner (32-bit .NET 4.0.30319.269)
xunit.dll: Version
Test assembly: C:\TalentQ\Experiments\UnitTestAnalysis\SampleCode.xUnit\bin\Debug\SampleCode.xUnit.dll
Tests: 4, Failures: 0, Skipped: 0, Time: 0.041 seconds
========== Build: 2 succeeded or up-to-date, 0 failed, 0 skipped ==========

The MSBuild task can be configured like the console runner, meaning that XML/HTML results can be saved too. See the documentation for more details.

Another useful thing is that, because it integrates into MSBuild, any failed tests will appear as errors in the IDE error list, which by definition makes it a failed build. The only slight oddity is that, in its current form (version 1.9.1), double-clicking an error in the error list does not take you to the correct source code, as the line numbers given refer to the project file, not the source files.

Additional Runners

xUnit.net also provides further test runners as standard.


Performance

Running tests seems to be faster than with MSTest, even with a significant number of tests to execute.


Reports

Apart from the output presented by the various test runners, an XML report can be produced by either the console or MSBuild runner. Once in XML format, this can then be transformed into another format, e.g. an HTML file to make it human readable, or a *.trx (MSTest) output file so that Visual Studio can understand it.

Fortunately xUnit.net is able to do this transformation for you as long as you provide the XSLT stylesheet to use. Out of the box the following stylesheets are provided:

  • HTML – transforms the XML report into a human-readable HTML report
  • NUnit – transforms the XML report into the same format that NUnit uses


Documentation

This in my opinion is where xUnit.net falters. Because it is a newer framework, documentation is thin on the ground, especially when compared to NUnit. Usually the features are simple enough to figure out, and there is sample code provided in the CodePlex repository, but you may also have to do some searching around on the internet for an explanation of some things.


Extensibility

One of the big selling points of xUnit.net is its extensibility, which is far greater than either MSTest or NUnit. Some examples are:

Report Transformations

By default the console runner can provide XML, HTML or NUnit report output, but this is configurable: further command line switches can be defined, each mapped to a suitable XSLT stylesheet, to transform the report into another format (e.g. *.trx (MSTest) format).

Extensions

The entire extensions assembly is a perfect example of its extensibility. For instance [Theory] methods are actually specialised [Fact] methods that do some additional work.

More Assertions

If there are not enough assertion functions for your liking, you can implement more by extending the Assertions class rather than having to write your own wrappers for it. For example:

public static class MyAssertions
{
    // By using extension methods you can add more assertions
    public static void Test(this Assertions assert)
    {
        Assert.True(true);
    }
}

// By deriving from TestClass a modifiable Assert class becomes available
public class CalculatorTests : TestClass
{
    [Fact]
    public void CustomAssert()
    {
        // This is our own assertion method
        Assert.Test();
    }
}

My Opinion

This is a tricky one. Whereas I felt that NUnit was miles ahead of MSTest, the difference between NUnit and xUnit.net is a lot smaller. To be fair, you could pick either one and be extremely productive, so it all comes down to nit-picking.

Both NUnit and xUnit.net have their benefits and each has a few disadvantages, but in the end, after much careful thought, I’ve decided to use xUnit.net as my primary test framework for the following reasons:

  1. I like the fact that XSLT stylesheets are provided with the framework so I don’t have to define my own HTML report format based on the XML output. And if I wanted to change the layout of the report I would at least have something to modify as a base.
  2. Overall the MSBuild task is a great way of integrating xUnit.net into the build process. Whereas MSTest and NUnit could be run as a task, the fact was that they were just starting a new process; the only way you would know your tests had failed was by checking the exit code of the test runner, which wouldn’t tell you anything useful.
  3. The extensibility of the framework is a real plus point. I haven’t needed to extend any features yet – consider that a testament to the basics it got right – but it’s nice to know that it is there if needed.
  4. Finally I just feel that it has a lot of potential. It may have some niggles to iron out but I feel confident that they will be.

All of the above are very minor points; like I said, I could just as easily have gone with NUnit, but xUnit.net just edged ahead in my opinion.