What makes a good unit test? That was the question posed recently on the Pragmatic Programmer's mailing list. Reader David Bock related a story about a test that suddenly started failing during a late-night coding session:
"I started looking through the recent changes I had made, wondering why my changes would have broken it. After a few bleary-eyed moments, the test mysteriously started working again. I looked at my clock: it was 12:03 am.
"I looked at the test: it was testing the method with a date/time of the time the test was run (basically, just new Date();). I looked at the method: it had an off by one error that made the method return noon TODAY if you asked it between 11:00 and 11:59:59 pm."
David went on to list three morals to the story--guidelines to use when writing good tests. Other developers chipped in with their own ideas, but most were variations on these three themes:
1. Make your unit tests repeatable in every aspect. For example, a test that uses the current date/time is testing with a random value every time you execute it. And if a test can't run predictably at the same time as any of your other unit tests, it indicates some environmental coupling that should be eliminated.
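One way to make time repeatable is to hand the code a clock instead of letting it call new Date() internally. This sketch uses the java.time API; the Greeter class and its greeting logic are hypothetical, invented here to illustrate the technique:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

// Hypothetical class: it asks an injected Clock for "now" rather than
// calling new Date(), so a test can freeze time at any instant it likes.
class Greeter {
    private final Clock clock;

    Greeter(Clock clock) {
        this.clock = clock;
    }

    String greeting() {
        int hour = ZonedDateTime.now(clock).getHour();
        return hour < 12 ? "Good morning" : "Good evening";
    }
}

public class GreeterDemo {
    public static void main(String[] args) {
        // Pin the clock to 23:30 UTC -- this test now gives the same answer
        // on every run, including one started at 11:59 pm.
        Clock fixed = Clock.fixed(Instant.parse("2024-03-01T23:30:00Z"),
                                  ZoneOffset.UTC);
        Greeter greeter = new Greeter(fixed);
        System.out.println(greeter.greeting());
    }
}
```

In production you pass Clock.systemDefaultZone(); in a test you pass Clock.fixed(...). The randomness of "the current time" disappears from the test entirely.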
2. Test your boundary conditions. For example, test the minute and second before and after midnight, noon, the day before a leap day, etc. Before I was introduced to unit tests I would often try my code interactively to see if it worked with any cases I could think of. That's OK as far as it goes, but now I can capture all that in unit tests, observe patterns, and find holes in what I'm testing. Not to mention that unit tests can be run over and over after every code change.
3. Always keep your tests passing at 100%. There are two main reasons you may be tempted not to do this. The first reason is that your tests may be taking too long. "If it takes more than a minute to run all your unit tests," another reader added, "you will be subtly discouraged from running them."
The second reason for a pass rate below 100% is what some call the "broken window syndrome". David writes:
"We had about 15 tests that were failing... first it was going to be like that for 'a week at most', then it became two, and it dragged on a little bit. People got used to seeing ~15 tests fail, so when it would go up to 17 and down to 14, no one would really pay attention."
It's easy to fall into this trap without realizing it. In David's case the continuous build had actually caught the failing test earlier, but the report was lost in the noise. So if you see one broken window, er... test, stop what you're doing and fix it immediately. If that's not possible, then comment it out and come back to it later. Make problems stand out.
Of course, there's a lot more to writing good tests than just these three rules, but by following them you'll be off to a good start. And maybe you can save yourself some of that midnight oil.