Test Driven Development is Better Development

Adam Lammiman
Sep 4, 2022
Image by Gerd Altmann from Pixabay

Test Driven Development, or TDD, is a development strategy or style. At its most basic it is summarised as 'write a test and then write the code to make that test pass'. There are nuances and complexities that add layers, but at its heart TDD is that one simple statement.
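To make that one simple statement concrete, here is a minimal sketch of the loop in Python. The function name `slugify` is invented purely for illustration; the point is only the order of events: the test exists, and fails, before the code does.

```python
# Step 1: write the test first. Running it now fails (red),
# because slugify doesn't exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("hello world") == "hello-world"

# Step 2: write the simplest code that makes the test pass (green).
def slugify(text):
    return text.replace(" ", "-")
```

From there the loop repeats: the next test asks the next question, and the code grows only to answer it.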

Its invention is often attributed to Kent Beck in the late 1990s, but he himself has said that it is something he learned from earlier sources and re-popularised.

What this means is that, as a concept, writing your tests before your code has been around for at least 23 years, and probably a lot longer than that. It's not a newfangled trend or fad technique; it's a tried and tested form of development.

Offshoots of this style like Behaviour Driven Development arose in the early 2000s, so again close to 20 years ago.

The fact that these techniques have lasted, especially in today's world of constant technology churn, where I cough and a new JavaScript framework appears, is testament to their effectiveness and validity. Yet it still surprises me that there appear to be large chunks of the industry that just don't get it. Either that, or they all turn up to my interviews.

If I see TDD written on a CV, the question I always ask is 'do you write your tests first?' Pretty much every time the answer is either a slightly guilty 'I've just been caught out' look and an admission that yes, they know they should, but no, they don't; or 'I'd like to, but I can't because of resistance from other people on the team, or the business not supporting me'; or any number of other excuses.

The fact that it graces every CV I've ever seen, and yet so many people feel they can say they do TDD and not expect to be challenged, proves to me that it is a skill the software industry simultaneously craves and yet fails to either teach or understand.

Writing tests after writing code is not TDD. Writing a couple of tests and then going off on wild tangents for a couple of hours before trying to backfill the gaps afterwards is not TDD. Using test coverage reports to maintain ‘99% test coverage’ is not TDD and using Specflow does not equate to BDD. So stop pretending it is.

I am not black and white about many things; I will always be interested in approaches and opinions different from my own. But on this I am utterly dogmatic, so I apologise if this is a one-sided, opinion-driven piece. However, in this instance I'm right.

Test driven code is better code.

Period. I have worked my way from no testing, to testing after the fact, to proper TDD and BDD. I have worked with many developers who don't practice it and a few developers who do it well, and I am confident in saying that the TDD and BDD approach is superior: the code is better, much more reliable, easier to read and understand, and less prone to bugs.

I am also pretty confident when I say that large numbers of bugs are written by developers because they have missed tests and their way of working practically ensures they will miss tests and thus create bugs.

If someone submits a technical test I can spot after-the-fact tests a mile off: from their structure, from their syntax and, most importantly, from the gaps. When this is transferred to production code the result is messy code, tests that are difficult to understand and maintain, and lots of holes for bugs to slip through.

I think part of the problem is that when people compare TDD to non-TDD practices they think they're comparing apples to apples, when really they're comparing apples to a horse. They think that by adding checks and balances like coverage reports they are creating a process that, though different, is fundamentally the same, but that just demonstrates the lack of understanding.

This is part of the reason I get annoyed about this. 'Yeah, I practiced TDD for a while but I found I had issues with it, so I tried this instead' is a conversation I have never had. The argument never comes from a point of actual comparison (because if they could properly compare, they wouldn't continue what they are doing).

Firstly Test Driven Development is not about test coverage. The proper practice of TDD fundamentally changes the way you think about code, the way you approach problems as well as the way you think about testing. Test coverage is really a minor side benefit.

Now that may seem slightly contradictory, seeing as TDD is all about the tests (it’s even in the name!) but it’s the sort of insight you gain only by really trying to rigorously apply the principles of TDD.

Often developers who write code and then write tests after use things like test coverage reports to determine if they have added the tests they need to cover their code and think this is just as effective as writing the tests first. However this is at best a sticking plaster and has the danger of encouraging poor behaviour.

It's the classic 'be careful what you measure'. Just because you have a report with 99.9% coverage doesn't mean the tests are actually covering anything. I don't think this is a surprise: when I challenge people about test coverage, almost everyone I have spoken to can recount an instance when reports have been abused in this way, and I have certainly worked with people who chased that stat but wrote useless tests to achieve it.
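A sketch of what such a useless test looks like in Python (`apply_discount` is a hypothetical function invented for this illustration): it executes every line of the code under test, so a coverage report shows 100%, yet it asserts nothing and would pass even if the maths were wrong.

```python
def apply_discount(price, percent):
    # Reject nonsensical percentages, then apply the discount.
    if percent < 0 or percent > 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

def test_apply_discount():
    # Exercises the code, satisfies the coverage report,
    # checks absolutely nothing about the result.
    apply_discount(100, 10)
```

The stat is green; the behaviour is unverified.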

But there's a deeper issue in the mindset this approach engenders. If the tests are seen as separate additions, strapped around the code as a requirement for safety and reliability (they may even be farmed out to separate people), then it's not about the test, it's about the result.

Reliance on test coverage reports indicates a lack of confidence: I can't be sure I've tested everything, so I need something else to tell me it's all OK. This lack of confidence is there because the workflow promotes it; you can never be sure you covered every behaviour with a test, because you're constantly having to backfill against multiple bits of behaviour. It also forces your thinking into 'testing equals lines of code covered', not an organic growth of behaviour.

The technical problems you often see with this approach are large tests that test too many things at once (because you're writing the tests at the end, and it's quicker to write one test that covers all that code than five for each of the behaviours you just added), convoluted test setup because the code wasn't written with the tests in mind, tests relying on shared magic data that only work because the data happens to be there, and gaps. Lots and lots of gaps.

This lack of confidence then leaks out into long running system and integration tests to catch the things that slip through the net. They’ll often arise from some kind of major failure and feel like an attempt to fix a process that, really, most understand isn’t actually working.

Instead of addressing the root cause, the low level disquiet finds expression in these tests. The baseline is separation and fear assuaged with the comfort blanket of a report and some system tests to make the business feel better.

To give a concrete example, I was working with a couple of younger developers the other day on a new bit of behaviour in a simple story that had been given to them as a training exercise. There was a display which showed a time in minutes when under an hour, and hours and minutes when over the hour.

They had one test to check for minutes and had added that code, but then they had gone on and added the behaviour to display hours and minutes without adding or updating the tests first. Everything was working as it should and it looked good, but only one bit of behaviour was covered, by that first test.

I’d like to say this was the folly of youth, but I have seen this pattern over and over again with some (on paper) really experienced developers. Test one thing but write the code for 3 things.

There was a contractor I worked with a while back. He was a nice guy and experienced, and we had a lot of long discussions about TDD, but he was too comfortable in his approach and would not change. He also introduced a ton of bugs, because his testing was poor for exactly the same reasons highlighted above.

This is an important point. He often wasn't introducing the bugs in the story he was working on, but he was increasing the possibility of bugs: when the tests have gaps, or the tests cover too many things at once, the next time you refactor that code the likelihood of a bug creeping through increases, and the more gaps there are the more likely that becomes.

In contrast, when tests are written first they are intimately linked with the code that passes them; they're entwined, unified and always relevant. It's a symbiotic relationship, not a policing relationship. By writing the test you are asking the question 'what should this do next?' and then answering it with the next bit of code. As the tests evolve so does the code, in a synergistic call and response. By limiting the input to just the next test, you are focussed in on each behaviour. Test coverage grows from this union as a consequence; you really don't even have to think about it.

To go back to that example from earlier: as soon as I got the two juniors to start again (yes, I made them delete the code first; they love it when I do that. But it's quicker, cleaner and safer to just clear down and write it again than to try and paste in the gaps) and add the test for hours and minutes, the immediately obvious question was: what happens if it's an hour with no minutes? This was completely missed when they'd added the code first, and would have immediately resulted in a bug. The act of writing the tests forced them to slow down and consider what they were doing and what the next step was, while at the same time documenting each decision they made with each test.
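Redone test-first, that exercise might look something like the following sketch in Python. The function name `format_duration` and the exact display strings are my own invention, not what the juniors wrote; the point is that writing the third test is what surfaces the exact-hour question at all.

```python
def format_duration(minutes):
    # Split total minutes into whole hours and the remainder.
    hours, mins = divmod(minutes, 60)
    if hours == 0:
        return f"{mins} min"
    if mins == 0:
        # The edge case the code-first attempt never considered.
        return f"{hours} hr"
    return f"{hours} hr {mins} min"

def test_under_an_hour_shows_minutes():
    assert format_duration(45) == "45 min"

def test_over_an_hour_shows_hours_and_minutes():
    assert format_duration(90) == "1 hr 30 min"

def test_exact_hour_shows_no_minutes():
    # Writing this test first is what forces the question:
    # what should an hour with no minutes look like?
    assert format_duration(60) == "1 hr"
```

Each test is one behaviour, one question, one answer.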

By putting the tests at the forefront, they are never an afterthought or an inconvenience; they are how you express your thought process as you explore the problem in front of you. And if something stops making sense, you can replay your thought process by looking at how your tests evolved.

As a consequence of this you are always thinking about the tests. If I'm adding a feature I look at the tests, if I want to refactor I look at the tests, if I find a bug I look at the tests. As you develop and refine from the tests inwards you start to develop an innate feeling for testing and the shape of the code. You can feel where a test needs to be added or split, you can feel when things start to get too complicated and you need to rein it in, and best practice becomes the norm (in order to mock a dependency in a test you need IoC; in order to write a test that tests only one thing you need SRP).
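That IoC point can be shown in a few lines. In this Python sketch (all names are hypothetical), the only way to test a time-of-day greeting without waiting for the clock is to inject the clock, so the test-first pressure pushes the design towards inversion of control on its own.

```python
class Greeter:
    def __init__(self, clock):
        # The clock is injected rather than hard-coded,
        # precisely so a test can substitute a fake one.
        self._clock = clock

    def greeting(self):
        return "Good morning" if self._clock() < 12 else "Good afternoon"

def test_greeting_in_the_morning():
    # A trivial hand-rolled fake clock: it's always 9am.
    greeter = Greeter(clock=lambda: 9)
    assert greeter.greeting() == "Good morning"

def test_greeting_in_the_afternoon():
    greeter = Greeter(clock=lambda: 15)
    assert greeter.greeting() == "Good afternoon"
```

Had the code been written first with a hard-coded system clock, injecting it would have been an awkward retrofit; written test-first, the seam is there from the start.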

It gets into your bones. I get a literal physical reaction in the pit of my stomach if I start working on something and I haven't gone near a test for a bit: 'this feels wrong!'

Another example: I was working with someone a while back, we'd just finished some frontend work and were doing some final checks on how it looked in the browser when my partner uncovered a bug. Their first reaction was to fix the bug; my first reaction was alarm bells and 'why are we green?' Going back and reviewing the tests, we found a couple that needed refactoring; we altered them, got a red test, and resolved the bug.

Yes, we found a gap; no system is entirely perfect. But because of the focus on testing, for me the problem wasn't the bug, it was the gap: why was it there and what did it mean? This meant that instead of just fixing and moving on, we reviewed the tests and made sure we had improved and resolved them first. This way every pass at the code tightens and improves, as opposed to loosens and weakens.

You can't develop this sixth sense without following a test driven approach; the mindset develops directly from that way of working, by ingraining the process so much it becomes second nature.

Now, not everyone has the luxury of working on greenfield software. Often we're slogging through old code that may not have been written with tests in mind; every company has some form of legacy it is managing, even if it's only last week's. This is where I often see the most criticism levelled at TDD: oh, it's OK if you're writing new code, but my code wasn't written to support test first.

I do understand this. If you're working on a legacy system, and especially if you haven't had exposure to good TDD practice before, it can be hard to see the wood for the trees, and the code you're working on doesn't really support your learning. But there is also a whole raft of techniques and documentation to help you move past that, improve systems and bring them under test. So yes, it is harder, but it's not an excuse to perpetuate the same pattern forever.

Michael Feathers' Working Effectively with Legacy Code is a classic that has been around since 2004 (again, close to 20 years) and it outlines many techniques for bringing legacy systems under test. There are other patterns like the pleasantly named 'strangulation pattern', where you build a new system that slowly cuts off or 'strangles' the old system as you move functionality to it (we've been using it at my present company to port a legacy web app to a newer, modern site).

Where there is a will there is a way, legacy code does not prevent you from adopting test first.

If you do come to legacy code with a test first approach there are several benefits. Firstly, your tolerance for poor testing will drop through the floor, so you are much more likely to do something about it. Secondly, because your mind focuses on the tests, it's easier to see where they are lacking and where there may be 'seams' (to use a Feathers term) in which dependencies can be broken and tests can be added or amended.
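One of the simplest seam techniques Feathers describes is extracting a troublesome dependency into a method you can override in a test (sometimes called subclass-and-override). A sketch in Python, with all class and method names invented for illustration:

```python
class LegacyReport:
    def total_owed(self, customer_id):
        # The database call is extracted into its own method:
        # that method is the seam.
        rows = self._fetch_invoices(customer_id)
        return sum(row["amount"] for row in rows)

    def _fetch_invoices(self, customer_id):
        # In the real class this would talk to the production database.
        raise RuntimeError("talks to the real database")

class TestableReport(LegacyReport):
    def _fetch_invoices(self, customer_id):
        # The seam overridden in the test: canned data, no database.
        return [{"amount": 10}, {"amount": 15}]

def test_total_owed_sums_invoice_amounts():
    assert TestableReport().total_owed(42) == 25
```

The business logic is now under test without touching the database, and from that foothold the dependency can be refactored properly.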

By starting to develop that innate feeling of what good looks like, you have a mental model of what you're aiming for, and can then start to develop strategies for how to achieve it. This puts you on an upward cycle of improvement, not a static one of maintenance and reinforcement, or a downward spiral of dissolution.

In conclusion: the concept of TDD has been around for a long time. Is it perfect? No, it's not. Is it better than any other approach to writing code? Yes, it certainly is. So why are you not doing it? If you are writing TDD on your CV but not actually following TDD practice, stop it and go and learn to do it properly. You can stop blagging those interviews, kiss those coverage reports goodbye, and find the joy of writing better code than you did before.


Adam Lammiman

Making software is a creative pursuit, not just a technical exercise. Exploring the best techniques for its development.