Best Practice: Test Coverage Goals and Tools

There are programmers who claim to have achieved, at least in one case, 100% code coverage during software testing.
This, however, is not real life. Using standard code instrumentation tools, no matter how good your test suite is, chances are that your tests produce a much smaller coverage.
Let us study an example: a very good tool to assess test coverage for programs written in Java is Cobertura. Looking up the code coverage achieved by the test suite for Cobertura itself, we see:
- The test suite testing Cobertura produced quite different code coverage for different packages and different classes: the coverage percentage for packages as well as for classes ranges from 0% to 100% (!).
- Wherever the percentage is considerably less than 100%, you are not told how critical this is or how costly it would be to find additional test cases.
To understand why, look at the following example:
Assume that you have an application used to maintain and update data that is very critical to your business (hence very valuable). The developers of this application, knowing that contaminating your data would be a major disaster, added many assertions to their code to guarantee that buggy logic would be detected and the program stopped long before data could be destroyed. The test suite they created, so we assume, reported 40% branch coverage (the percentage being so low because many branches simply lead to the error exits in assertions).
The interesting point is: just by deleting all assertions, you would get a second version of this application that would be much more dangerous to use (hence less valuable). However, the same test suite would now report, say, 95% branch coverage.
So you see: the coverage percentage actually achieved says nothing by itself about how useful or how well tested the code actually is. And if you force your team to deliver very high branch coverage, they might simply be forced to delete assertions. You would be satisfied and never know that your software is now much riskier to use (!).
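The effect described above can be sketched in Java (all class and method names here are hypothetical illustrations, not taken from any real application): every defensive guard adds a branch that a passing test suite, by design, never enters, so the guards drag branch coverage down even though the happy path is fully exercised.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example: a store for business-critical balances, guarded
// by assertion-style checks. A healthy test run never takes the error
// branches, so branch coverage stays well below 100% by construction.
public class AccountStore {
    private final Map<String, Long> balances = new HashMap<>();

    public void credit(String account, long amount) {
        // Each guard below adds a branch that passing tests never enter.
        if (account == null || account.isEmpty())
            throw new IllegalArgumentException("account must not be empty");
        if (amount <= 0)
            throw new IllegalArgumentException("amount must be positive");

        long before = balances.getOrDefault(account, 0L);
        balances.put(account, before + amount);

        // Post-condition: yet another branch, taken only on buggy logic.
        if (balances.get(account) != before + amount)
            throw new IllegalStateException("balance update corrupted");
    }

    public long balance(String account) {
        return balances.getOrDefault(account, 0L);
    }
}
```

Deleting the three guard branches would raise the reported branch coverage of the very same test suite, while making the class strictly less safe to use.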
This is why I always use my own code instrumentation utility:
- It is built so that code representing assertions is simply ignored (assertions can thus be present without reducing my test coverage numbers).
- The metric I use is Method Exit Coverage (also called Return Door Coverage), and my instrumentation utility can even weight the different checkpoints (the weight being, e.g., the code size of the method in question).
- Then I set myself the goal of achieving (close to) 100% test coverage, which is now a quite reasonable, well-defined goal.