Monday, January 15, 2024

Fundamentally Wrong

The Problem

Several friends, a few critics, and some people who are both have shared an article with me describing how TDD is fundamentally wrong and how test-after development is better.

To be fair, the process described in the article is fundamentally wrong:



The problems with the article's step #1 would indeed lead to the wasteful outcomes it describes. Its recommendation would certainly be better than the process described above:



"TDD Is Fundamentally Wrong" Is Fundamentally Wrong


Now, the problem with this article is more fundamental than the problem it describes.

TDD does not mean "Write all the tests then all the code"

It has never meant that.

That is not TDD.

That is some other misbegotten travesty that has no name.


This is the fifth or sixth time I've heard someone describe TDD as writing all the tests first. In every case but one, the description came from people who self-describe as anti-TDD and who write articles decrying the foolishness they identify as TDD (which is not TDD).

I have never seen anyone do TDD that way -- even unsuccessfully.  I have never seen anyone even try to do TDD that way. I would never sit by while someone tried to do that and called it TDD. That's simply not the process.

The one time I read an article that actually recommended doing that, it was from a Microsoft publication early in the XP/Agile days. The public outcry was great and sudden, and the article was retracted. I don't think I've seen anyone else recommend that approach, because it is so obviously flawed and was never the process used by the original XP teams.

So What Is TDD?


TDD was originally described as a three-step dance that repeats over and over as one develops code.

You can take that straight from the horse's mouth (so to speak):



To these three steps, we (at Industrial Logic) added a fourth and final step. Others may have independently added this fourth step as well; I don't know.

  1. Write a test that does not pass, and in fact cannot pass, because the functionality it describes does not exist. This is the RED step, so called because test runners typically display the results of a failing run in red.
  2. Write the code that passes the test. When the test passes, the test runner typically displays the results in green, so this is often called the GREEN step. The code may not be optimal or beautiful, but it does (only) the thing the tests require it to do.
  3. Refactor the code and tests so that both are readable, well-structured, clean, simple, and basically full of virtue (so far). Refactoring requires the presence of tests, so working this way we can refactor as soon as the code passes rather than waiting until all the code for our feature/story/task is finished. We refactor very frequently.
  4. Integrate with the code base. This includes at least a local commit. Most likely it also includes a pull from the main branch (preferably git pull -r). More than half the time, it includes a push to the shared branch as well, so everyone else can benefit from our changes and detect any integration issues early. One full pass through the cycle is sketched below.
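Here is a minimal sketch of one pass through the cycle, in Python. The function leading_digits, its behavior, and the test name are all hypothetical, invented purely to show the rhythm; the git commands in the final comment are the ones named in step 4.

```python
# RED: write a test that cannot pass yet -- leading_digits does not exist.
def test_extracts_digits_from_front_of_string():
    assert leading_digits("42abc") == "42"

# GREEN: write just enough code to make the test pass.
def leading_digits(text):
    digits = ""
    for ch in text:
        if not ch.isdigit():
            break
        digits += ch
    return digits

# REFACTOR: same behavior, cleaner expression; the passing test lets us
# reshape the code without fear.
from itertools import takewhile

def leading_digits(text):
    return "".join(takewhile(str.isdigit, text))

# INTEGRATE: at least a local commit, usually a rebasing pull, and more
# often than not a push so teammates see the change early:
#   git commit -am "extract leading digits"
#   git pull -r
#   git push
```

Run under pytest, the test fails until the GREEN step, stays green through the refactoring, and then the change is integrated.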


We repeat this cycle for the next test. The whole cycle may repeat 4-10 times an hour.


We do 1-2-3-4-1-2-3-1-2-3-4; we do not do 111111111-222222222-44444-(maybe someday)333. These steps are not batched.

Was It A Misunderstanding of the List Method?

Some people misunderstood Kent Beck's List Method, in which you begin with a step 0: writing down a list of the tests you think you will need to pass for your change to be successful (see the screenshot of, and link to, Kent Beck's article).

Note that you only make a list of tests. You do not write them all in code.

As you enter the TDD cycle, you take a test from the list. That may be the first test, the easiest test, the most essential test, or the most architecturally significant test. You follow the 4-step (or 3-step) dance as above. 

If you realize a test is unnecessary, you scratch it off the list. Don't write tests if the functionality they describe is already covered by an existing test.

As you make discoveries, you add new tests to the list. That discovery may lead you to scratch some unwritten tests off the list. That's normal.

Eventually, you will note that all the tests on your list have been scratched out. Either you implemented them, or you realized they're unnecessary. This applies to the tests you discovered as well as the original list.

You've done everything you can think of that is relevant to this task, so you must be done. This is doubly true if you have a partner thinking alongside you, and more certain still if you have a whole ensemble working with you.

You never had to predict the full implementation of the features and write tests against that speculative future state.

It's a tight inner cycle, not a series of batches.
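To make that concrete, here is a sketch of what such a list can look like while work is in flight. The feature and the individual entries are hypothetical, carried over from the earlier sketch; the point is that the list is notes, not code.

```python
# Test list for leading_digits -- ideas to test, not coded tests.
# Items get scratched off when implemented or judged unnecessary.
#
#   [done]      "42abc" -> "42"
#   [done]      ""      -> ""
#   [scratched] "007x"  -> "007"      (no new behavior beyond "42abc")
#   [todo]      "abc"   -> ""
#   [todo]      None    -> TypeError  (added after a mid-cycle discovery)
```

When every entry reads done or scratched, the task is finished.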

Do I disagree with the article, then?


Indeed, "write all the tests first" would only work for the most trivial and contrived practice examples. It would never suffice in real work, where 11/12ths of what we do is learning, reading, and thinking.

As far as the process being unworkable, I totally agree.

As far as that process being TDD, I totally disagree. 

That characterization of TDD is fundamentally wrong.

