1. See each test fail at least once (so you can trust it).
2. Make test failure messages helpful, because tests fail when you are working on something else.
3. Keep a list of tests you want to write. "Ignored" tests will do nicely.
4. Run all the tests so you know when your last change has broken something.
5. Keep your feedback loop as tight as you possibly can.
I see all of these violated regularly, so I thought I'd make a short list and save you some time.
The first two go together. You want to see each test fail so you can read its message. Sometime in the future you'll make a change, an older test will fail, and you'll see the test class name, the test name, and the assert message. Those three should work together so that you know what kind of mistake you made.
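A minimal sketch of how those three pieces can read as one sentence, using Python's `unittest` (the `order_total` function and all names here are hypothetical, for illustration only):

```python
import unittest

def order_total(subtotal, tax_rate):
    """Hypothetical function under test."""
    return subtotal * (1 + tax_rate)

class TestOrderTotal(unittest.TestCase):
    def test_total_includes_tax(self):
        # When this fails months from now, the report reads:
        #   TestOrderTotal.test_total_includes_tax:
        #   "total should be subtotal plus 8% tax (100 -> 108)"
        # Class name, test name, and message together say what went wrong.
        self.assertAlmostEqual(
            order_total(100.0, 0.08), 108.0,
            msg="total should be subtotal plus 8% tax (100 -> 108)")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Temporarily changing `108.0` to a wrong value is a cheap way to see the failure output once and check that the message actually helps.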
Number 4 goes with number 5, too. If you only run the one test you're working on, you may have dozens of breakages by the time you get around to running all the tests. If you work on a component team (not my favorite organization, but sometimes necessary), then you should at least run all of the component's tests. The more "distance" between the injection of an error and its detection, the harder the error is to isolate, reproduce, and fix. That's why #5 is on the list.
That leaves 3 and 4, which are about planning your steps. You might find some power in the idea of a list. Start by thinking, and create a list of tests you want to write. Then pick the one you want to do next, based on what is either simplest or most important. When you think of new tests, add them to the list. When you find that some tests are no longer important, or describe cases that are already covered, drop them. I learned this by watching Kent Beck's TDD videos.
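One way to keep that list right in the test file, sketched with Python's `unittest` (the `Cart` class and test names are hypothetical): skipped tests stand in for tests not yet written, so the runner reports the remaining work on every run.

```python
import unittest

class Cart:
    """Minimal hypothetical cart -- just enough to run the first test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class TestCart(unittest.TestCase):
    # The test list: one test written, the rest parked as skipped
    # placeholders. The runner shows them as 's', keeping the plan visible.

    def test_empty_cart_totals_zero(self):
        self.assertEqual(Cart().total(), 0,
                         msg="a new cart should total zero")

    @unittest.skip("TODO: next -- simplest remaining case")
    def test_single_item_total(self):
        pass

    @unittest.skip("TODO: most important -- items sum their prices")
    def test_two_items_sum(self):
        pass

if __name__ == "__main__":
    unittest.main(exit=False)
```

Writing the next test is then just a matter of deleting one `@unittest.skip` line and filling in the body; dropping a test that's no longer important means deleting its placeholder.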
If I were to shorten the list, I would say:
- Write short tests with great messages
- Track the next small steps you intend to take.
- Keep safe by running all tests frequently.
That's short, but not as actionable. In the longer five-step form, it's a little easier to take on.