Friday, 10 August 2012

What I've Learned About Test Driven Development

At the end of April this year my colleagues and I prepared and deployed a big website/system update. A lot of things would change for the better, and despite our rather lax testing procedures - testing being informal and done by us developers - we felt we had tested it as much as we possibly could and were very happy with what we had done. We deployed it over a weekend with scheduled downtime, did our updates along with some server enhancements, and brought the site back up...

...Starting on the following Monday and continuing for the next two weeks we did nothing but patch holes with endless hotfixes and grovel to our customers, promising that we would fix all the bugs they were reporting non-stop as soon as possible. One of our web services did not start up again for an entire weekend and our biggest client wanted to know why they couldn't access our systems. Users were complaining that the website kept kicking them out at random times, which we eventually found was being caused by a StackOverflowException occurring every 30 minutes and crashing the IIS worker process. It got to the point where the CEO got involved, and at the eventual end of this nightmare my manager had to go and visit some key clients to personally assure them that all was well and that this would never happen again.

We felt gutted. By our standards this was the most tested release we had ever done and yet it ended up being the worst one in our company's history.

This is all background to the real purpose of this post. When the dust settled we all sat down and discussed what went wrong and why, and one of the key points we learned (though not the only one) was that our testing strategy was dire. In the absence of a real software tester (though we have one now) testing was down to the people writing the code, and it tended to be a quick couple of run-throughs in certain areas to make sure things worked as we expected. This was not good enough, and I simply felt that I couldn't trust our own system anymore.

A friend of mine had talked to me about the benefits of Test Driven Development (TDD) a couple of weeks earlier, and at this point I put it to my team. I felt that, as well as having a proper software tester, we as developers needed to think about testing from the very start. In essence, we had to build quality into our code.

And so began my long journey into a development practice that I had never tried before and yet now makes so much sense to me. Last week marked the very first unit and functional tests ever checked into our codebase, and although they only cover a tiny fraction of our entire code I feel much more confident that that code works as it should, even before it goes to our tester to evaluate.

I will not go into the entire TDD process - there are enough resources on this already - but I thought as this was a learning process for me I would share with you what I've learned in the past few months.

Roy Osherove to the Rescue!

I did a lot of research into TDD, trawling the internet and reading through a book a colleague lent me but I still had many questions and things I was not sure about. What is the difference between a unit test and an integration test? What if your code has to read from a database or a file? What is a stub and what is a mock?

Then I listened to an episode of the Hanselminutes podcast in which Scott Hanselman talks to Roy Osherove, author of The Art of Unit Testing, and all my questions were answered in 30 minutes. If anyone else is having trouble understanding unit testing and TDD I would very much recommend listening to it, as it helped me a lot.
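For what it's worth, the stub-versus-mock distinction that finally clicked for me can be sketched in a few lines. This is an illustrative Python example (our real code is C#/.NET, and every name here - `StubUserStore`, `MockEmailSender`, `greet_user` - is invented): a stub feeds canned data into the code under test so you can assert on its *result*, while a mock records calls so you can assert on the *interaction*.

```python
# A stub supplies canned answers so the test can run without the real dependency.
class StubUserStore:
    def get_name(self, user_id):
        return "Alice"  # canned value; the test asserts on the RESULT of the code under test

# A mock records how it was called so the test can assert on the INTERACTION.
class MockEmailSender:
    def __init__(self):
        self.sent = []
    def send(self, address, body):
        self.sent.append((address, body))

# The code under test, with its two dependencies passed in.
def greet_user(user_id, store, sender):
    name = store.get_name(user_id)
    sender.send(f"{name}@example.com", f"Hello, {name}!")
    return name

sender = MockEmailSender()
name = greet_user(42, StubUserStore(), sender)
assert name == "Alice"                                           # state-based check, via the stub
assert sender.sent == [("Alice@example.com", "Hello, Alice!")]   # interaction check, via the mock
```

Osherove covers this far more rigorously, but as a rough rule of thumb it served me well.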

No-One Likes MSTest

Before you can write your tests, though, you need a unit testing framework to help you write them - but which one do you choose?

Initially I started using MSTest, as it came integrated with Visual Studio and let me experiment quickly while learning the TDD process, but in the end I saw some flaws with it - as do many others:

  1. The benefit of having MSTest integrated into Visual Studio can also be its downfall, as you simply cannot run it standalone.
  2. MSTest provides only the bare minimum of features needed to call it a unit testing framework. I learned that other unit testing frameworks were more capable and had far better features than MSTest, including assertion of exceptions and data-driven tests.
  3. The last problem I had was more of a personal one, in that the test runner window didn't make it clear to me whether all the tests had passed or failed. Compare the screenshots below of the MSTest and NUnit test runners:
[Screenshot of the MSTest runner, courtesy of Robust Haven. Notice that tiny little tick at the top of the window? That tells me that all the tests passed.]

Now compare that to the NUnit test runner. See that great big green line? Guess what that means?

[Screenshot of the NUnit runner, courtesy of litemedia.]

So I tried out NUnit, which I liked very much and which seemed very dependable, and then most recently the framework I finally settled on.

What I learned from this, though, is that once you take MSTest out of the equation any test framework is usually good enough, and it simply comes down to personal preference. I went with my chosen framework because it provided several things I found useful straight out of the box, such as an MSBuild task to add to your project builds - if a test fails then the build fails - and an XML stylesheet for creating HTML reports by default, rather than my having to make one myself, as with NUnit.
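To show what I mean by the two features I missed most in MSTest - exception assertion and data-driven tests - here is roughly what they look like, sketched with Python's standard unittest module rather than our actual .NET frameworks (the `parse_age` function is a made-up stand-in):

```python
import unittest

def parse_age(text):
    # Hypothetical function under test: turn a string into a valid age.
    age = int(text)              # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

class ParseAgeTests(unittest.TestCase):
    def test_rejects_garbage(self):
        # "Exception assertion": the test PASSES when the exception is raised.
        with self.assertRaises(ValueError):
            parse_age("not a number")

    def test_valid_inputs(self):
        # "Data-driven" testing: one test body run against a table of cases.
        for text, expected in [("0", 0), ("42", 42), ("130", 130)]:
            with self.subTest(text=text):
                self.assertEqual(parse_age(text), expected)

unittest.main(argv=["ignored"], exit=False, verbosity=0)
```

NUnit and its relatives offer the same two ideas through their own attributes; the point is simply that MSTest at the time made both noticeably harder.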

Doing it Backwards

The very first stumbling block was that I had to mentally switch my working style around to be, effectively, back to front. Write tests before code? How was that meant to work? Surely you need something there to test first? But as long as you follow the Red, Green, Refactor process it does actually work.

Even so, for a beginner like me it required a very large mental shift. For my very first unit test I must have sat there for a good half-hour thinking "How do I write this test? Where do I start?" Once you get past the first test - usually making sure something works - then the next test seems a bit more obvious: "What if I pass in the wrong input?" This could produce quite a few more unit tests so now the ball starts rolling. And once the first code function is tested you start to see a pattern so the next batch of tests seem a bit easier to write, and so on.
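As an illustration of that first-test hurdle, here is the kind of minimal Red-Green sequence I mean, sketched in Python (the `slugify` function is an invented example, not code from our system): write the failing assertion first, then just enough code to make it pass.

```python
# RED: the assertions below are written first. With no slugify() defined yet,
# running this file fails - which is exactly the point: the failing test
# tells us what to build.

def slugify(title):
    # GREEN: the simplest code that makes the assertions below pass.
    return title.strip().lower().replace(" ", "-")

# The "tests" (a real project would put these in a proper test framework).
assert slugify("Hello World") == "hello-world"
assert slugify("  Trimmed  ") == "trimmed"

# REFACTOR: with the tests green, we are free to clean up the implementation,
# re-running the assertions after every change to make sure nothing broke.
```

The code is trivial on purpose; the habit of starting from the assertion is what took me that half-hour to get used to.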

As long as you can start writing some unit tests to begin with, the process becomes a lot more intuitive up to the point where it seems like second nature and the way you were working before seems so arcane now.

Designing Your API

The other thing that struck me is that, despite the word "test" in Test Driven Development, initially it feels more like I'm writing a very detailed specification for all my objects. As I build up each test it feels like I am defining what the code should be doing, which to my mind is almost like a list of requirements - function A will allow a string to be passed in, but not an empty string or one that has more than 100 characters, and so on.
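Taking that "function A" requirement literally, the specification-as-tests idea looks something like this (a Python sketch; `set_title` is a made-up name, not part of our system):

```python
def set_title(title):
    # Implementation written to satisfy the "specification" encoded below.
    if not isinstance(title, str) or title == "":
        raise ValueError("title must be a non-empty string")
    if len(title) > 100:
        raise ValueError("title must be 100 characters or fewer")
    return title

# Each test reads like a line from a requirements document:
# "allows a string to be passed in..."
assert set_title("Quarterly report") == "Quarterly report"

# "...but not an empty string, one over 100 characters, or a non-string."
for bad in ["", "x" * 101, None]:
    try:
        set_title(bad)
        assert False, f"expected rejection of {bad!r}"
    except ValueError:
        pass
```

Read top to bottom, the tests double as the documented contract for the function.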

TDD has also made me think a lot more about object dependencies and code design. When I sat down and looked at my codebase as it stands now, many objects in the business logic layer call straight into the data-access layer and database, and these objects simply assume that this will just work - there has to be a database present to do anything in my system. In the real world that makes perfect sense, as a system without a database is pretty much useless, but in a testing scenario it cannot be assumed that a database is there, and for unit testing nothing should be touching external state.

This led me to wonder how I could actually test anything if all of my code is so interdependent (in other words, tightly coupled). The answer for TDD is to make your code loosely coupled instead, which impacts your code design and makes you think a lot more about patterns and practices like Dependency Injection. As a result I'm starting to feel a lot happier with the overall design of my code, as it has started to resemble smaller, Lego-sized blocks which can be combined into bigger systems.
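In practice, loose coupling meant introducing a seam between the business logic and the database. Here is a minimal sketch of the idea in Python (all the names - `OrderRepository`, `OrderService` - are illustrative, not our real code): the service depends on an abstract repository, so the unit test can hand it an in-memory stub instead of a live database.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """The seam: business logic depends on this interface, not on a database."""
    @abstractmethod
    def orders_for(self, customer_id): ...

class OrderService:
    def __init__(self, repository: OrderRepository):
        self._repository = repository   # dependency injected via the constructor

    def total_spent(self, customer_id):
        return sum(o["amount"] for o in self._repository.orders_for(customer_id))

# In production this interface would be backed by the real data-access layer;
# in a unit test we inject an in-memory stub and never touch external state.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self, orders):
        self._orders = orders
    def orders_for(self, customer_id):
        return [o for o in self._orders if o["customer"] == customer_id]

repo = InMemoryOrderRepository([
    {"customer": 1, "amount": 10.0},
    {"customer": 1, "amount": 5.5},
    {"customer": 2, "amount": 99.0},
])
assert OrderService(repo).total_spent(1) == 15.5
```

The service never knows whether it is talking to a database or a stub - and that indifference is exactly what makes it unit-testable.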

Start From Scratch

Following on from the previous section: if you already have a large codebase and start to introduce TDD into it, then like me you might find it a bit daunting. Where do you start? Do you retroactively add unit tests to existing code? And what about the dependency problems you already have?

So I learned that the best way to learn about TDD and introduce it into your working practices is to start on something brand new, a clean slate. That way you can build up your tests and knowledge as you go. And if you can't start from the very beginning? Then pick a new feature that you're due to develop and start thinking about how you would implement it in a TDD fashion. If only one part of your codebase has been written with testability in mind then that is better than nothing.

Test As Much As You Can

My final thought is that initially I assumed you would have to aim for 100% test coverage to feel you had done a good job. While it is a target to aim for, you don't have to achieve it; for many reasons it may simply be impossible to reach.

The very first feature I implemented into our product with TDD in mind only has about 30 unit tests, and that doesn't even cover the entire feature end-to-end, the reason being that the existing code has dependencies I cannot extricate just yet. But the new code I wrote was unit tested. In all, the total test coverage is a tiny percentage, but at least I can build upon it over time.

All in all I now think in a TDD style and find it difficult when I have to revert back to the old-fashioned code-then-test mentality. It's only been a few months since I started investigating this process as a possibility, but I'm glad that I did, as I feel one step closer to being a better programmer.
