Code coverage versus test coverage has been a hot topic lately in the QA/DevOps community. For those who aren’t familiar with these terms, code coverage is a measure of how much code is executed during testing, while test coverage is a measure of how much of the feature being tested is actually covered by tests*.
Recently, we came across a fantastic piece written by Dan Ashby (Head of Software Testing, eBay) thanks to Michael Bolton (software testing professional and evangelist), who shared the article on Twitter last week.
Ashby’s article helps to differentiate the often-conflated topics of code coverage and test coverage with a simple metaphor: a child’s push toy. He explains the idea better than we ever could, so we highly suggest you give his article (linked above) a quick read before continuing on.
Ultimately, he argues that even 100% code coverage has no bearing on actual test coverage, which may or may not be fully accounted for to begin with.
In other words, code coverage is objective but doesn’t tell us how well tested a piece of software really is, while test coverage is subjective and therefore often discounted.
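To make that concrete, here is a minimal, hypothetical sketch (the function and test names are our own invention, not from Ashby’s article): a test that executes every line of a buggy function -- 100% line coverage -- while its assertion is too weak to catch the bug.

```python
# Hypothetical illustration: 100% code coverage, poor test coverage.

def apply_discount(price, percent):
    """Return the price after a percentage discount."""
    # Bug: the discount is added instead of subtracted.
    return price + price * (percent / 100)

def test_apply_discount():
    # This test runs every line of apply_discount, so a coverage tool
    # reports 100% line coverage for it...
    result = apply_discount(100, 10)
    # ...but the assertion is too weak: it passes even though the
    # correct answer is 90, not 110. The bug goes undetected.
    assert result > 0

test_apply_discount()  # passes, despite the bug
```

A coverage report would mark this code fully covered, yet the behavior the feature actually needs -- that a discount reduces the price -- was never tested. That gap is exactly the distinction Ashby is drawing.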
So, with all this in mind -- and after giving Ashby’s article a quick read -- what do you think?
How does your organization measure or track how well tested your software is?
*Michael Bolton prefers to explain test coverage as "how much /testing/ has been done (with respect to some model)."