During Agile 2011 I tried to speak to people about which technical practices they used, and to what degree. Unfortunately I left the conference slightly disappointed, and I now understand why Uncle Bob champions the Software Craftsmanship movement as a way of redressing a balance the agile community at large has lost.
Don’t get me wrong. I appreciate that we need to focus on people and system issues in software development as well. However, it’s laughable to think that software development is the easy part, and I’m not the only one who feels this way. To succeed at real agility, you need to apply just as much discipline and attention to detail to the technical practices, tools and methods as you do to everything else. It seemed like people were trying to be a good sports team by finding the best people and hiring the best coaches, and then forgetting to schedule time for the drills and training.
One particular conversation stuck with me. We had it in the airport lounge at Salt Lake City before departing for Chicago. It went something like this:
Other: We only tend to TDD the happy path
Me: Oh? Out of interest, what sort of test coverage do you end up with?
Other: About 20 or 30%. Why, how about you?
Me: Depends on what sort of application. Adding up the different types of tests (unit, integration, acceptance), I’d probably guess in the high 80s to 90s.
Other: Wow. That’s really high.
This conversation got me thinking about whether the other person’s team really benefits from doing TDD. I’m guessing not. They are probably getting some value from the tests, but probably equal value to doing test-after. If you really wanted to be a pedant, you could ask how you do TDD without actually driving (most of) the code with tests. I’m not a TDD zealot. I’ve even blogged about when to skip TDD. However, one of the biggest benefits of writing tests is the feedback they give and the way they change how you design code. I’m guessing that by skipping the “unhappy paths”, there’s plenty of feedback they miss out on.
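To make that concrete, here’s a minimal, hypothetical sketch (the `parse_amount` function and its behaviour are invented for illustration). The happy-path test alone never forces a decision about bad input; the unhappy-path test does, and that kind of decision is exactly the design feedback you lose by skipping it.

```python
import unittest

def parse_amount(text):
    """Parse a monetary amount like '19.99' into cents."""
    if text is None or not text.strip():
        # This branch only exists because the unhappy-path test below
        # forced a decision: fail loudly rather than silently return 0.
        raise ValueError("amount is missing")
    return int(round(float(text) * 100))

class ParseAmountTest(unittest.TestCase):
    def test_happy_path(self):
        # The only kind of test a "happy path only" team would write.
        self.assertEqual(parse_amount("19.99"), 1999)

    def test_missing_amount_fails_loudly(self):
        # The unhappy path asks a question the happy path never does:
        # what should a caller see when the input is bad?
        with self.assertRaises(ValueError):
            parse_amount("")

if __name__ == "__main__":
    unittest.main()
```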
If their coverage is in the order of 20-30% then, by definition, they’re not doing TDD. This is pretty typical. Teams tell me “we do TDD” when what they really mean is “we write some unit tests”. Evidence strongly suggests that bugs will start to appear in the 70-80% of untested code, and making changes to the code will only be marginally safer when most of the code isn’t being regression tested automatically.
What was the reason their test coverage was so low? Were they doing just some rough ATDD-style tests, or did they choose the parts they decided to test-drive at random?
I believe having 100% test coverage as an absolute goal in itself is not a very pragmatic approach, but I do think a developer should be able to clearly argue why certain parts of the code have lower coverage.
TDD rule #1: “You are not allowed to write any production code unless it is to make a failing unit test pass.”
That one rule means that TDD gives you 100% coverage.
There are no exceptions.
Your “Other” has 30% test coverage; ergo, “Other” is not doing TDD.
You have 90% coverage; ergo, you are not doing TDD.
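For anyone who hasn’t seen that rule played out, here’s a minimal red-green sketch (a hypothetical `fizz` example, not anyone’s real code) of why strictly followed TDD gives near-100% coverage by construction: no production line may exist until a failing test has demanded it.

```python
import unittest

# Red: this test is written first and fails, because fizz() doesn't exist yet.
class FizzTest(unittest.TestCase):
    def test_three_becomes_fizz(self):
        self.assertEqual(fizz(3), "Fizz")

# Green: only enough production code is written to make the test pass.
# Under rule #1 this line could not legally exist before the test did,
# so every production line ends up covered by the test that demanded it.
def fizz(n):
    return "Fizz"

if __name__ == "__main__":
    unittest.main()
```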
Jason,
“If their coverage is in the order of 20-30% then, by definition, they’re not doing TDD.”
I agree that they are not doing TDD, but what is the official TDD definition? Kent Beck’s book?
And where is this strong evidence suggesting “that bugs will start to appear in the 70-80% of untested code”? Do you mean scientific, published evidence?
Actually 70-80% seems to be a rule of thumb, not a scientific fact.
Good point, Pat, on an important topic.
Funny how the comparison with sports teams so often helps explain a quite basic point…
Thanks!
Agile without disciplined dev practices is like cookies without dough…
Olaf
First of all, unit tests shouldn’t cover only 20-30%. That’s not TDD; maybe the unit test code exists just to satisfy a supervisor or somebody else. If we do it like that, the unit tests have lost their original meaning and, to be frank, it becomes an overloaded methodology.
There are too many metrics in TDD. Actually, I’m not a zealot about TDD, but I am a zealot about any process or skill that will make the team successful, not just about “successful delivery”. TDD is a way of trying to extract the marketable value, because it comes true from the customer’s perspective. It pushes communication with stakeholders and surfaces important, even massive, business defects in the beginning phase.
We should consider the cost-value ratio, balancing resources, short-term advantage and long-term advantage.
I’m not a zealot about automated testing either, but we should be sober about any discipline or methodology.
Nice post Pat! I completely agree.
You mentioned Test Coverage and it’s one of my favorite topics and one about which I’m quite passionate.
I’ve recently seen people measuring the “quality” and “maintainability” of code by its test coverage. It was quite high actually, but most of the tests were tautological (TTDD). The developers would write tests “just to achieve the coverage”. It’s almost like there’s a 1-to-1 relation between every single line of code and line of test… In fact, that takes you to 100% coverage very quickly… But it’s soooooo dangerous and hard to maintain, and it adds no value whatsoever (a sketch of what I mean follows below).
Nevertheless, you are so right when you say that if you really do TDD you should get high test coverage. That said, if I could choose between 20-30% coverage from GOOD tests and 100% coverage from tautological tests, I’d go with the former.
My 0.02
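Here’s a hypothetical sketch of the tautological tests Fabio describes (the `total_price` function is invented for illustration). Both tests produce 100% line coverage, but only the honest one can fail when the logic is wrong.

```python
import unittest

def total_price(items):
    return sum(item["price"] * item["qty"] for item in items)

class TotalPriceTest(unittest.TestCase):
    def test_tautological(self):
        # Tautological: the expectation re-runs the production logic,
        # so this test passes no matter what total_price computes.
        items = [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}]
        expected = sum(item["price"] * item["qty"] for item in items)
        self.assertEqual(total_price(items), expected)

    def test_honest(self):
        # Honest: the expected value is stated independently,
        # so the test fails if the arithmetic is ever broken.
        items = [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}]
        self.assertEqual(total_price(items), 25)

if __name__ == "__main__":
    unittest.main()
```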
So if I use TDD for key business logic yet admit my test coverage is low, I “don’t do TDD”? Don’t buy that.
@Jason – This “typical” interpretation is what worries me.
@Tero – I agree 100% test coverage is not the goal. I never saw the code, but from the conversation it seemed like “TDD” = “happy-path tests”, even if it was just the first one.
@Olaf – Thanks for the comment
@Phoebus – I’m not so sure they were writing tests just to please their supervisor. It just seemed like too much of a “loose” interpretation of TDD.
@Fabio – Good points about tests written for their own sake.
@Anonymous – It wasn’t just the test coverage that triggered this post. It was the point that preceded it – “We only TDD the happy path.” Both of these smell like doing a practice for the sake of doing it. Of course, you could apply TDD to just the “key business logic” and claim you’re doing TDD. I’m sure you’d easily find places where TDD would have changed the design even if it wasn’t key business logic. For me, it’s not simply about adding test coverage, but about fundamentally changing the way you write code.
(Sorry, some comments got lost in another category)
@Shmoo – As Fabio points out, there is a point of diminishing returns depending on what you’re trying to TDD (it doesn’t have to be unit tests alone – integration and functional tests count too). I guess part of the trick is finding out where that cost-benefit lies. I’m not so sure the “happy path only” rule of thumb is the best method.
OK, with that little coverage these particular folks are probably not doing TDD.
The main point of this post to me, however, is that lately the agile community has started caring way too much about the soft stuff – planning, estimation, etc. – rather than about the hard stuff and craftsmanship. Well, probably it is so, but is it really bad?
IMHO we’re seeing the usual overreacting pendulum. A while ago the hot topic was all about coding (recall the days when everybody just had to go after objects); this time the hot topic is the reorganization of teams, collaboration and planning; next iteration the pendulum will swing back to the technical stuff again. Stories like this one are the early signs of it, to me.
Adolfo: We can /reason/ to the fact that defects will start to appear where there is no test coverage, as follows.
1. As humans we insert defects into our code at some random rate.
2a. A defect inserted where there are tests/checks will be found.
2b. In areas not covered by testing/checking, defects will not be found.
3. Therefore, defects grow where there is no test coverage.
In code where there are tests/checks, we justly have more confidence that there are no defects. In code where there are not, we must either have less confidence, or apply some other means to remove defects. There are other such means: I would bet that the low-coverage team does not use any of them. Therefore I would bet //a lot// that defect density is higher in their unchecked code.
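A back-of-the-envelope sketch of that reasoning, under simplifying assumptions that go beyond anything Ron claims: defects land uniformly at random, every defect in covered code is caught, and every defect in uncovered code escapes.

```python
# Toy model only; the uniform-insertion and perfect-detection assumptions
# are mine, not Ron's.
def escaped_defects(total_defects, coverage):
    """Expected number of defects surviving in untested code."""
    return total_defects * (1 - coverage)

if __name__ == "__main__":
    print(escaped_defects(100, 0.30))  # 30% coverage: ~70 of 100 defects escape
    print(escaped_defects(100, 0.90))  # 90% coverage: ~10 of 100 defects escape
```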
All:
Doing TDD is not some all or nothing thing. At the moment that we write a line of code without a failing test, yes, we are not doing TDD. When lines of new code /are/ in response to a test, we may be doing TDD (there are additional requirements to TDD besides this one).
Either way, TDD is a practice, not some higher goal like good will toward people. It’s the best practice I know for rapidly producing clean code that works, which is why I use it so often. And when I do not use the TDD practice, I notice these things:
1. My code has more defects.
2. I am less confident about it (and justly so).
3. Some lack of skill in testing that kind of code seems to be the main reason I don’t use TDD there, not some inherent property of the code.
More briefly, to those who want to say they’re doing TDD but are only getting 20-30% coverage:
You’re not doing TDD very often. You might be doing it sometimes.
That could be OK. There are other ways to avoid defects. In my opinion, if you notice that defects crop up where you didn’t use TDD, you owe it to the people who are paying you to think seriously about upping your game.
Questions I’d consider asking Pat’s interlocutor:
1. How long have you been doing TDD?
2. Do you have plans to test-drive more than just the happy paths?
3. What changes have you noticed since you started doing TDD?
4. Do you like doing TDD so far? Would you like to do more?
There are others, but that would eat up enough time.
@Artem – I feel swinging pendulums are bad. Part of the agile manifesto (to me) is about harmonising both sides and recognising that you need to think about everything (technical and soft stuff together). When I see both of these working, that’s where the magic happens.
@Ron – Thanks for the comment. Good point about TDD not being all or nothing. My concern was really about whether or not they were getting much value out of doing it, based on the short conversation I had.
@JB – Great questions to ask, as usual coming from you.
Pat, I don’t think the soft-hard pendulum is that good either, but that’s the way it happens.
The good news is that, like any average or statistic, it doesn’t have to hold true for a concrete organization. You can have a company that is outstanding in both hard and soft skills and improves on both.
TDD is so boring! For small apps it’s just a waste of time, especially if not everyone on your team likes it. TDD my ass.