My pair and I were talking about getting some code coverage results for our project (it has been a while since we last had a report, because of various environment changes and breakages caused by the move to Java 5).
We finally managed to get a report, and we talked about which parts of the system turned out to be tested. We picked a few parts at random, and it was interesting to see the different styles of tests that had been written over time. I like to think that my style of tests (at least my current style) tends to be very user-centric, aimed at representing whatever use case(s) I am working on at the time. I like to make sure that the tests I write leave behind the business reason why the code I wrote actually exists.
In contrast to my own personal style, some of the tests we inspected seemed to test the structure of the code that was written rather than just the business rules. Although I was glad to see that the code was actually tested, I find these sorts of tests (probably closer to white-box testing) less useful than those based on business rules (think black-box testing), because they tend to need changing more often and therefore make refactoring much harder. The reason is that if I change the code to support a similar but different business rule, can I be confident that all the previous business rules still hold? Probably not.
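To make the contrast concrete, here is a minimal sketch of the two styles side by side. The class and method names (OrderPricer, DiscountCalculator) are hypothetical, the discount rule is made up for illustration, and I'm assuming JUnit 4; it is not code from our project.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical example: an OrderPricer that gives a 5% discount
// on orders of ten or more items.
public class PricingTestStyles {

    // --- production code under test (hypothetical) ---

    interface DiscountCalculator {
        double discountFor(int quantity);
    }

    static class StandardDiscountCalculator implements DiscountCalculator {
        public double discountFor(int quantity) {
            return quantity >= 10 ? 0.05 : 0.0;
        }
    }

    static class OrderPricer {
        private final DiscountCalculator calculator;

        OrderPricer() {
            this(new StandardDiscountCalculator());
        }

        OrderPricer(DiscountCalculator calculator) {
            this.calculator = calculator;
        }

        double price(int quantity, double unitPrice) {
            double gross = quantity * unitPrice;
            return gross * (1.0 - calculator.discountFor(quantity));
        }
    }

    // Business-rule style: states the use case and survives a rewrite
    // of the internals, as long as the behaviour still holds.
    @Test
    public void ordersOfTenOrMoreItemsGetAFivePercentDiscount() {
        assertEquals(9.5, new OrderPricer().price(10, 1.0), 0.001);
    }

    // Structure style: pins down how the code is wired (that a
    // DiscountCalculator collaborator is consulted), so it breaks
    // whenever that internal structure changes.
    @Test
    public void priceConsultsTheDiscountCalculator() {
        final boolean[] called = { false };
        OrderPricer pricer = new OrderPricer(new DiscountCalculator() {
            public double discountFor(int quantity) {
                called[0] = true;
                return 0.0;
            }
        });
        pricer.price(10, 1.0);
        assertTrue(called[0]);
    }
}
```

The first test would still pass if the discount logic were inlined into OrderPricer; the second would not, even though the business gets exactly the same prices.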
I find that some of my tests might end up exercising the same bit of code a few more times, but I have more confidence that if someone decides to change the implementation, the business will still get the same behaviour out of the system. Oh, and I'm not saying that code coverage metrics don't add value – they certainly highlight parts of the system that need testing, but like everything else in software development, the results need just that little bit more judgment and thought to interpret.