The Case for Automated Testing in Line of Business Software Development

11 May 2015

Introduction

Software development is hard. Not only does a software developer need to understand the language, framework, environment, tools, and patterns and practices of their given stack, they must also spend time learning and understanding the business domain of each application they write. Developers often work on more than one piece of software, sometimes for multiple different customers, and therefore have to keep all of that knowledge in their heads as they context switch across oh so many different variables.

I can't speak for you, dear reader, but every time I switch between software projects, and switch between domains, I feel a little like Homer Simpson, complaining that:

[...] every time I learn something new, it pushes some old stuff out of my brain.

The problem is that the domains and applications we work on are complex, and encoded within them is an application's lifetime of bug fixes and corner-case logic. Each time we go to make a change to an existing application, we can't possibly fit the entire behavior of the system into our heads.

I find this deeply troubling. How can I expect to do an adequate job as a software developer if I'm not able to understand, to a reasonable degree, the system that I'm changing? And if I take the time to squeeze as much knowledge about the system into my head as I can, so that I feel capable of doing the best job I possibly can, am I really using my time, and my employer's time, as effectively as possible?

Letting the Computer Do the Things It Is Good At

A good friend is fond of saying

Humans are good at reasoning with uncertainty, computers are good at performing tasks accurately and repeatably. Let the computer do the thing it is good at and the human do the thing they are good at.

I reckon this is a pretty apt description of the problem we software developers face on a daily basis. I'm sure we can all agree that humans are not very good at understanding the entirety of a complex software system. I doubt we're even very good at understanding a simple software system.

If only there were a way to outsource that job to a computer. If only there were a way to give the computer a list of behaviors we expect the system to conform to while we're writing it. If only there were a way to have the computer take that list and assert, whenever we made a change, that the system still conformed to the expected behaviors we enumerated when we wrote it. And if we could do all that, imagine if we could then shorten the feedback loop, so that the moment we altered the system in a way that didn't conform to our expected behaviors, we would be alerted, could fix the issue, and could get the system back to a valid state.

I can't speak for you, dear reader, but I reckon that would be a pretty amazing way for a software developer to work.

Automated Testing

I'm sure by now you've got the idea of where I'm going with my facetious what-if scenario. We already have a tool that does exactly what we're talking about: automated testing. With automated testing, you can outsource the expected behavior of the system to the computer, and the computer can let you know when you're breaking that expected behavior.
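To make that concrete, here's a minimal sketch of what encoding an expected behavior looks like, using ScalaTest (the Invoice type and the 10% tax rule are invented for illustration):

    import org.scalatest.FunSuite

    // Hypothetical domain code: an invoice whose total includes 10% tax.
    // Amounts are in cents to keep the arithmetic exact.
    case class Invoice(lineItemsInCents: Seq[Long]) {
      def totalWithTaxInCents: Long = (lineItemsInCents.sum * 110) / 100
    }

    // The expected behavior, written down so the computer can re-check it
    // every time we change the system.
    class InvoiceTotalSpec extends FunSuite {
      test("an invoice total includes 10% tax") {
        val invoice = Invoice(Seq(1000L, 550L))
        assert(invoice.totalWithTaxInCents == 1705L)
      }
    }

Every time this test runs, the computer re-asserts that behavior for us, accurately and repeatably, just as my friend's aphorism suggests it should.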

With continuous testing tools such as NCrunch (seriously, probably my favorite software development tool out there) and SBT's continuous testing, we can get literally up-to-the-second feedback on the changes we make to a system, as we make them! It's funny: as a developer who has worked with continuous testing tools both at the office and in my personal time, I find the thought of working without them a little too much to bear.
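If you haven't seen SBT's version in action, it's a one-character change: prefix a command with a tilde and SBT watches your sources, re-running the command on every save.

    sbt "~test"       # watch sources, re-run the full suite on every save
    sbt "~testQuick"  # re-run only failed tests and tests affected by the change

The testQuick variant narrows the re-run down to just the tests affected by your change, which gets you a fair way towards that NCrunch-style feedback loop.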

The Case Against Automated Testing

And yet, there seem to still be quite a few holdouts. You'd think, after I, and many others, have evangelized the wonders of automated testing, all developers would be throwing their hands in the air with delight, firing up their IDE of choice and starting down the path to the promised land of Red-Green-Refactor. So why isn't everyone sold?

There are many arguments against automated testing. As you would expect, some of them are persuasive and valid, and others not so much. What I intend to do with the rest of this blog post is to take some of these concerns and do my best to convince you that they can be overcome. That you too can enjoy the benefits of automated testing, regardless of where you work, or the code base that you work on.

Extra Code to Write and Maintain

The words "automated testing" evoke fear in some developers' minds. They conjure an image of extra work and extra maintenance. That seems like a pretty good argument to me; I think we can all agree that writing less code is good for everyone, and yet here I am evangelizing the idea that we should be writing more code about the code we're already writing.

I feel like the issue here is one of definition, and so I'd like to clear that up by offering my own definition of automated testing.

Automated testing is the act of letting the computer do your work for you. That is, it does the verification work that you should be doing anyway.

I'm pretty sure this is my own definition of automated testing, but in case I've read this elsewhere and internalized it (I have read a lot about automated testing), please let me know in the comments and accept my sincere apologies.

The software you write is pretty important, yeah? You generally pride yourself on writing as few bugs as possible, right? So when you write new code, you at least execute the lines of code that you wrote, yeah? I might sound a little snarky here, but I would expect any developer working on software of consequence to have executed the code they wrote at least once before it's released into production. It's only professional.

The problem is that doing this manually is, in practice, very difficult. Especially when you are altering someone else's code and you don't even know which lines of code your change really affects, short of playing human compiler and trying to figure that out.

This is why I consider the argument of extra code to write and maintain to not hold much weight when discussing the pros and cons of automated testing. While it's true that you end up with more code to maintain, in the long run that code replaces work that is far more laborious, error prone and, frankly, boring.
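To illustrate the kind of work being replaced, here's the sort of test I have in mind. The customer-code padding rule below is invented, but it stands in for the corner-case logic I mentioned earlier; once the behavior is pinned down, nobody ever has to re-verify it by hand:

    import org.scalatest.FunSuite

    // Hypothetical corner case: customer codes are trimmed, upper-cased and
    // left-padded with zeros to six characters; legacy imports sometimes
    // arrive with surrounding whitespace.
    object CustomerCode {
      def normalise(raw: String): String =
        raw.trim.toUpperCase.reverse.padTo(6, '0').reverse
    }

    class CustomerCodeSpec extends FunSuite {
      test("codes are trimmed, upper-cased and left-padded to six characters") {
        assert(CustomerCode.normalise(" ab12 ") == "00AB12")
      }

      // Once a corner case is fixed, its test lives here forever; nobody has
      // to remember to manually re-check it after every change.
      test("codes already six characters long are left alone") {
        assert(CustomerCode.normalise("AB1234") == "AB1234")
      }
    }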

It Takes Too Much Time

I'll accept that, when getting started, I took longer to write code with automated testing than without. However, I find it pretty difficult to argue that it takes me longer these days to write code with tests. As I mentioned above, tests let me run every line of my code each time I make a change to the code base (within reason). The time saved debugging, manually testing, and freaking out in stressful, time-sensitive situations far outweighs, I feel, the time spent thinking about and writing tests.

Bad/Slow Tests

An extension of "it takes too much time" is the argument that, without being an expert on writing tests, a developer can get themselves into a situation where the tests are so poorly written that they take far too long to run and cause bottlenecks in the deployment process. I'm not denying that this can be an issue, but surely we're putting the cart before the horse a little bit here.

I'm a big fan of the Voltaire aphorism:

Perfect is the enemy of good.

Worrying about the quality of the tests written, before they are actually written, might be somewhat wise (though beware YAGNI). However, not testing at all because of the possibility of writing inefficient tests is just going to block you off from any benefits that come from automated testing. I can't help but see this as a symptom of premature optimization (which, as we all know, is the root of all evil).

All I can really say to this argument is this: let's wait until we have some automated tests, before we start worrying about making them perfect.
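And when a test really does turn out to be slow, the fix is usually mundane rather than architectural. Most frameworks let you tag slow tests and exclude them from the fast feedback loop; in ScalaTest, that might look something like this (the Slow tag name and the reporting example are mine):

    import org.scalatest.{FunSuite, Tag}

    // A tag for tests we're happy to run on the build server,
    // but not on every save.
    object Slow extends Tag("Slow")

    class ReportingSpec extends FunSuite {
      test("nightly reconciliation report balances to the ledger", Slow) {
        // stand-in for an expensive end-to-end check
        assert(true)
      }
    }

A quick local run can then skip the tagged tests with something like sbt "testOnly * -- -l Slow", while the build server still runs the lot.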

Code That Actively Resists Tests

This is the big one for me, and one that I've definitely experienced before. I truly feel your pain if this is the situation you're stuck with, but there is light at the end of the tunnel. I'm not going to go into too much depth here, because there is a whole world of information on adding automated testing to legacy systems. Instead, I'm going to insist that you start with this awesome blog post from Erik Dietrich and remember not to let perfect be the enemy of good. Trying to do too much at once when adding automated tests to legacy code can leave you feeling overwhelmed. Take it a piece at a time; if you can't figure out a way to get some code under test, leave it, and by the time you come back, you might have thought of a way to make it work.
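To give a flavor of what "a piece at a time" can look like, one common first move is to introduce a seam: push the dependency that's resisting you behind a small trait so the surrounding logic becomes testable. A hypothetical sketch, for code that used to read the system clock directly:

    import org.scalatest.FunSuite

    // Before: the overdue rule called the system clock directly, so it
    // couldn't be tested without real time passing. Pushing the clock
    // behind a small trait gives us a seam we can control in tests.
    trait Clock {
      def now: Long // epoch millis
    }

    class OverdueChecker(clock: Clock) {
      private val ThirtyDaysMillis = 30L * 24 * 60 * 60 * 1000

      def isOverdue(issuedAtMillis: Long): Boolean =
        clock.now - issuedAtMillis > ThirtyDaysMillis
    }

    class OverdueCheckerSpec extends FunSuite {
      test("an invoice issued 31 days ago is overdue") {
        val fixedClock = new Clock { val now = 31L * 24 * 60 * 60 * 1000 }
        assert(new OverdueChecker(fixedClock).isOverdue(issuedAtMillis = 0L))
      }
    }

One small extraction like this won't get the whole system under test, but it gets one rule under test, and that's the point.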

Conclusion

I hope I've added to the voices out there that promote automated testing and made at least one developer feel a little bit better about adding tests to their own code. Remember, the best way to start solving any problem is to break it up into manageable pieces. Once you start to get some confidence on small pieces, you'll start to feel more capable of tackling the larger problems. For now, if you agree or disagree (especially if you disagree), please leave a comment and let me know why. More discussion about automated testing can only be a good thing.

Happy testing.

Tags: Testing