Test-driven development (TDD) is one of the sacred cows of certain segments of the agile community. The theory is that:
1. If you write tests before you write behavior, it will clarify your thinking and you will write better code.
2. The tests will expose the need to remove unnecessary coupling between methods, because coupling forces you to write "mocks", and that is painful.
3. When the code is done, it will have a full coverage test suite. To a large extent, that obviates the need for "testers" to write additional (functional) tests.
4. The tests define the behavior of the code, so a spec for the code's methods is not necessary.
Many people in the agile community have long felt that there was something wrong with the logic here. What about design? To design a feature, one should think holistically, and that means designing an entire aspect of a system at a time - not a feature at a time. Certainly, the design must be allowed to evolve, and should not address details before those details are actually understood, but thinking holistically is essential for good design. TDD forces you to focus on a feature at a time. Does the design end up being the equivalent of Frankenstein's monster, with pieces added on and added on? Proponents of TDD say no, because each time you add a feature, you refactor - i.e., you rearrange the entire codebase to accommodate the new feature in an elegant and appropriate manner, as if you had designed the feature and all preceding features together.
That's a lot of rework though: every time you add a feature, you have to do all that refactoring. Does it slow you down, for marginal gains in quality? Well, that's the central question. It is a question of tradeoffs.
There is another question though: how people work. People work differently. In the sciences, there is an implicit division between the "theorists" and the "experimentalists". The theorists are people who spend their time with theory: to them, a "design" is something that completely defines a solution to a problem. The experimentalists, in contrast, spend their time trying things. They create experiments, and they see what happens. In the sciences, it turns out we need both: without both camps, science stalls.
TDD is fundamentally experimentalism. It is hacking: you write some code and see what happens. That's ok. That is a personality type. But not everyone thinks that way. For some people it is very unnatural. Some people need to think a problem through in its entirety, and map it out, before they write a line of code. For those people, TDD is a brain aneurysm. It is antithetical to how they think and who they are. Being forced to do it is like a ballet dancer being forced to sit at a desk. It is like an artist being forced to do accounting. It is futile.
That is not to say that a TDD experience cannot add positively to someone's expertise in programming. Doing some TDD can help you to think differently about coupling and about testing; but being forced to do it all the time, for all of your work - that's another thing entirely.
Doesn't the Agile Manifesto say, "Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done"?
I.e., don't force people to work a certain way. Let them decide what works best for them. Don't force TDD on someone who does not want to work that way.
But if everyone on a team does TDD, there is consistency, and that is good. The argument is always, "If we all do TDD, then we can completely change our approach as a team: we don't need testers, we don't need to document our interfaces, and we will get better code as a team. So people who can't do TDD really don't fit on our team."
So if Donald Knuth applied to work on your team, you would say, "Sorry, you don't fit in"; because Donald Knuth doesn't do TDD.
What ever happened to diversity of thought? Why has agile become so prescriptive?
Also, many of the arguments for TDD don't actually hold up. #1 above is true: TDD will help you to think through the design. But, TDD prevents you from thinking holistically, so one could argue that it actually degrades the design, and constrains the ability that many people have to creatively design complex things. And that's a shame. That's a loss.
#2 about improving coupling is true, but one does not have to do TDD for that. Instead, one can write methods and then attempt to write unit tests for them. The exercise of writing the unit tests will force one to think through the coupling issues. One does not have to do this for every single method - something that TDD requires - one can merely do it for the methods where one suspects there might be coupling issues. That's a lot more efficient.
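The mechanism is worth making concrete. Here is a minimal sketch, with entirely hypothetical names (`Database`, `total_price`, an order lookup), of how the mere attempt to write a unit test surfaces a coupling problem - and how the decoupled version needs no mock at all:

```python
from collections import namedtuple
from unittest import mock

Item = namedtuple("Item", "price")

class Database:
    """Stand-in for a real data store (entirely hypothetical)."""
    def items_for(self, order_id):
        raise RuntimeError("would hit a real database")

# Hard-wired dependency: any unit test must patch Database, and the
# pain of writing that mock is what exposes the coupling.
def total_price_coupled(order_id):
    return sum(item.price for item in Database().items_for(order_id))

# Dependency passed in: the same behavior, testable with no mock at all.
def total_price(items):
    return sum(item.price for item in items)

# Testing the coupled version requires a mock...
with mock.patch.object(Database, "items_for", return_value=[Item(2), Item(3)]):
    assert total_price_coupled(42) == 5

# ...while the decoupled version does not.
assert total_price([Item(2), Item(3)]) == 5
```

The point stands whether or not one did TDD: writing the test after the fact exposes the coupling just as effectively, and one can reserve the exercise for the methods where coupling is actually suspected.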
It can be argued that the enormous number of tests that TDD generates results in less agility - not more. Full coverage tests at an interface level provide plenty of protection against unintended consequences of code changes. For those who use type-safe languages, type safety is also very effective for guarding against unintended consequences during maintenance. One does not need a mountain of unit tests. Type safety is not about productivity: it is about maintainability, and it works.
#3 about code coverage is foolish. The fox is guarding the henhouse. One of the things that tests are supposed to check is that the programmer understands the requirements. If the programmer who writes the code also writes the tests, and if the programmer did not listen carefully to the Product Owner, then the programmer's misunderstanding will end up embedded in the tests. This is the test independence issue. Also, functional testing is but one aspect of testing, so we still need test programmers.
One response to the issue about test independence is that acceptance tests will ensure that the code does what the Product Owner wants it to do. But the contradiction there is that someone must write the code that implements the acceptance criteria: who is that? If it is the person who wrote the feature code, then the tests themselves are suspect, because there is a lot of interpretation that goes on between a test condition and the implementation. For example, "When the user enters their name, Then the system checks that the user is authorized to perform the action". What does that mean? The Product Owner might think that the programmer knows what "authorized" means in that context, but if there is a misunderstanding, then the test can be wrong and no one will know - until a bug shows up in production. Having separate people - who work independently and who both have equal access to the Product Owner - write the code and the test is crucial.
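To see how an interpretation hides inside a step implementation, consider a sketch (all names hypothetical) of two plausible readings of "the user is authorized". Whichever reading the programmer embeds in the step is the one the acceptance test silently enforces:

```python
from collections import namedtuple

# Hypothetical user record behind the step:
#   "Then the system checks that the user is authorized to perform the action"
User = namedtuple("User", "logged_in permissions")

# Interpretation A: "authorized" means merely logged in.
def is_authorized_loose(user, action):
    return user.logged_in

# Interpretation B: "authorized" means holding an explicit permission.
def is_authorized_strict(user, action):
    return user.logged_in and action in user.permissions

# Both interpretations agree for the happy-path user the programmer
# probably had in mind...
admin = User(logged_in=True, permissions={"delete_account"})
assert is_authorized_loose(admin, "delete_account")
assert is_authorized_strict(admin, "delete_account")

# ...but diverge for a logged-in user without the permission. If the
# same programmer wrote both the feature and the step, the acceptance
# test passes under either reading, and no one notices the gap.
guest = User(logged_in=True, permissions=set())
assert is_authorized_loose(guest, "delete_account")
assert not is_authorized_strict(guest, "delete_account")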
I saved the best for last. #4.
Let me say this clearly.
Tests. Do. Not. Define. Behavior.
Tests. Are. A. Horrible. Substitute. For. An. Interface. Spec.
Tests do not define behavior because (1) the test might be wrong, and (2) a test specifies what is expected to happen in a particular instance. In other words, tests do not express the conceptual intention. When people look up a method to find out what it does, they want to learn the conceptual intention, because that conveys the knowledge about the method's behavior most quickly and succinctly, in a way that is easiest to incorporate into one's thinking. If one has to read through tests and infer - reverse engineer - what a method does, it is time-wasting and confusing.
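The contrast is easy to demonstrate. In this sketch (a deliberately trivial `clamp` helper, chosen for illustration), the docstring states the conceptual intention in two lines, while the tests below it only sample particular instances that a reader must reverse-engineer:

```python
def clamp(value, low, high):
    """Return value constrained to the closed interval [low, high].

    Values below low return low; values above high return high.
    Callers must ensure low <= high.
    """
    return max(low, min(value, high))

# From tests alone, a reader must infer that intention case by case:
assert clamp(5, 0, 10) == 5     # in range: unchanged
assert clamp(-1, 0, 10) == 0    # below range: pinned to low
assert clamp(99, 0, 10) == 10   # above range: pinned to high
# And the tests are silent on edge conditions the spec settles in one
# clause: what does clamp(5, 10, 0) mean? The tests don't say.
```

The tests are still valuable as verification, but the docstring is what answers "what does this method do?" at a glance.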
The argument that one gets from the TDD community is that method descriptions can be wrong. Well, tests can be incomplete, leading to an incorrect understanding of a method's intended behavior. There is no silver bullet for keeping documentation complete and accurate, and that applies to the tests as well as the code comments. It is a matter of discipline. But a method spec has a much better chance of being accurate, because people read it frequently (in the form of javadocs or ruby docs), and if it is incomplete or wrong people will notice. Missing unit tests don't get noticed.
If people want to do TDD, and it is right for them and makes them productive, then let them do it. But don't force everyone else to do it!
Long live testing!