After reading the article "Is Code Coverage Irrelevant?" by Ron Jeffries, I feel compelled to accept the challenge and say what I really think about code coverage.
I've been thinking about writing this article for some time... so this seems like the right moment.
What does coverage mean?
Let's start with this tweet:
Two projects, one has 95% code coverage with tests, one has 45%. You’re going to be paid per bug found. Which one do you want to work on?
Well... I agree with most people: it depends. I don't have enough information to decide.
But my first question is different from everyone else's: "What kind of coverage?"
Companies usually measure coverage without thinking about it, but coverage produced by different kinds of tests means different things.
Unit test coverage reflects our intention to test. High coverage produced by unit tests means we have tried to test the code carefully, even though it doesn't guarantee we have actually succeeded.
It is very hard to increase coverage with unit tests. Indeed, it is very hard to write really good unit tests.
On the other hand, integration test coverage shows us how much code is not exercised in any way. It is not a measure of goodness but a measure of doubt: you can be sure the covered code works in at least one use case, but uncovered code... is a mystery. I've seen Python code fail in trivial methods because of a typo, simply because there were no tests.
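A minimal sketch of what I mean (the Rectangle class is invented for illustration): in Python, attribute names are only resolved at runtime, so a typo in a never-executed line is invisible until some test, or a user, finally reaches it.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        # Typo: "heigth" instead of "height". This line is syntactically
        # valid, so nothing complains until the method is actually called.
        return self.width * self.heigth

# Any test that merely executes area() exposes the bug immediately:
r = Rectangle(2, 3)
try:
    r.area()
except AttributeError as e:
    print(e)  # the typo surfaces only when the line runs
```

With zero coverage on `area()`, this bug ships; with even one integration test touching it, it cannot.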
Finally, acceptance test coverage shows which methods are probably unused or not useful. Nobody bothered to test them, so there is probably no flow that reaches them: they are either dead code or unnecessary functionality.
Sadly, companies tend to merge all these percentages into a single number.
In general, the more telling number is the uncovered code, even though people only look at the covered code.
What does high coverage really give you?
Well... it gives you probabilities.
As an example, FormatMessage, a function in the Windows API, is difficult to cover well: too many arguments, too many flows. It would require a lot of tests. But the uncovered lines tell us exactly where the code is prone to fail.
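To see why flags make coverage expensive, here is a toy sketch in Python (not the real FormatMessage, just an invented function in its spirit): every boolean option roughly doubles the number of paths a test suite must exercise.

```python
def format_message(template, args=None, from_system=False, ignore_inserts=False):
    """Toy illustration: each flag adds branches, multiplying the paths to test."""
    if from_system and ignore_inserts:
        raise ValueError("incompatible flags")
    if from_system:
        return f"[system] {template}"
    if ignore_inserts:
        return template  # leave {0}, {1}, ... placeholders untouched
    if args is None:
        raise ValueError("args required when inserts are expanded")
    return template.format(*args)  # expand positional placeholders

print(format_message("hello {0}", ["world"]))        # hello world
print(format_message("raw {0}", ignore_inserts=True))  # raw {0}
print(format_message("disk full", from_system=True))   # [system] disk full
```

Two flags already demand five or six test cases just for the happy and error paths; a real API with a dozen flags and argument modes explodes far beyond that, which is why its uncovered lines pile up.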
The typical math.sqrt function, which returns the square root of a given number, is easy to push to 100% coverage without testing it at all: it may fail with zero or negative values, or work only for a couple of numbers. In that case, coverage tells us... nothing, maybe.
You can only make firm statements about uncovered code, because it is the unknown.
In addition, high unit test coverage may mean that we have very small methods, that we spent a lot of time testing, or that we have very bad unit tests (which are really integration tests in disguise). Probably.
I really, really love the article Code coverage goal: 80% and no less!, by Alberto Savoia. Coverage is something that helps us understand what our code does and where its flaws are. It cannot be used as a performance metric or a measure of a developer's quality.
I always say: reaching 100% coverage is easy. The difficult part is to actually test it all.