Friday, September 10, 2021

FIRST: an idea that ran away from home

Quite some years ago, Brett Schuchert and I invented the acronym FIRST for microtests, though we called them "unit tests" (as was common at the time). The acronym was later included in Agile In A Flash and also published in a Pragmatic Programmers' article.

It's grown a following and has been widely repeated. At this point, the idea just exists in the aether, and authorship is rarely considered or cited. I suppose it has become "common knowledge," at least in some circles: a thing in its own right.

I'm still proud of Brett's work and my small contribution to it. I'm glad it has taken on a life of its own, but I notice when it's poorly described or corrupted, and I'm offended when people present it as their own unique work (or accept praise for it, knowing that it is not original).

I get it, though. I'm sure there are many people whose work I didn't know how to credit, and whose ideas I have likewise bent to my own ends or interpreted in my own context, whether I realized it or not.

The acronym is pretty simple (a short example follows the list):

  • Fast - You can run all your microtests so quickly that you never have to decide whether to run them now or later.
  • Isolated - Tests don't rely on each other in any way, even indirectly. Each test isolates one failure mode only.
  • Repeatable - The test always gets the same result. This is largely a matter of good test setup and cleanup, but it also means the test must not depend on anything that can change underneath it: time of day, network status, databases, global state, configuration, and so on.
  • Self-validating - A test is pass/fail. You never have to examine the input, output, or system state to determine whether a test has passed. It is green (passed), red (failed), or did not run to completion (error), and it reports that fact clearly.
  • Timely - Microtests are not written in batches, either before or after coding. They are written with the code, preferably just before adding a line or just after finishing a tiny change to a function.
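
To make those properties concrete, here is a minimal sketch of a microtest that exhibits them. It is runnable with pytest, and parse_price is a hypothetical bit of production code defined inline only so the example stands on its own:

from decimal import Decimal


def parse_price(text):
    """Hypothetical production code: turn a string like '$1,234.50' into a Decimal."""
    return Decimal(text.lstrip("$").replace(",", ""))


def test_parse_price_strips_currency_symbol_and_commas():
    # Fast: a pure in-memory call -- no I/O, no network, no database.
    # Isolated: builds its own input and shares no state with other tests.
    # Repeatable: independent of time of day, environment, and test order.
    result = parse_price("$1,234.50")

    # Self-validating: the assertion alone decides green (pass) or red (fail).
    assert result == Decimal("1234.50")

(Timely is about when the test gets written rather than how it looks: in practice this test would appear just before the line of production code that makes it pass.)
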
Some people change the T to Thorough because they don't want to encourage TDD. 

I think this is a mistake. It's not "just as good" to have high coverage from a batch of tests written after the code is finished. It's not even nearly as good.

Done correctly, the tests inform and guide the code (a short sketch follows the list below).

  • They help us to consider our code first from the point of view of a user of the code, which results in more sensible and cohesive APIs. 
  • The tests make acceptance criteria a primary concern. If we don't know what result we are expecting, we can't write the test for it.
  • They make testability a primary concern: we can't write a test if we can't call the method or can't observe the result of making the method call. There is no "untestable code" if the tests come first.
  • As we realize we have corner cases, we add more tests, so that it's clear when we have covered the corner cases in the production code.
  • Because tests are written first, they are a kind of standalone documentation - you read the tests to understand the code. When tests are written after the code, they tend to take the structure and algorithms of the written code for granted: you must read the code to understand the test.
  • Whatever code passes the tests is sufficient; the tests help us recognize when the work is complete.
  • Since the invariants for the production code are tested automatically, we can refactor the code to a different shape and design with confidence: if all of the tests still pass, we haven't violated any of our design invariants.
  • Each time our tests pass (run green) we are invited to reexamine the production code and the tests for readability and maintainability. This allows us to practice code craft continuously.
  • Because our intentions for the code are captured in the tests, we have externalized our intentions. We can be interrupted and quickly regain our context in the application - something that can't happen if we're in the middle of transcribing a large plan from our heads into the codebase.
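
As a rough sketch of that rhythm (the Basket class and its add/total API are invented for illustration), the test below would be written first, phrased from the caller's point of view, and the production code written only to make it pass:

class Basket:
    """Hypothetical production code: just enough to satisfy the test below."""

    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)


def test_total_sums_all_added_prices():
    # Written first: the expected value (3.5) is the acceptance criterion,
    # and the add/total API is shaped the way a caller would want to use it.
    basket = Basket()
    basket.add(1.25)
    basket.add(2.25)

    assert basket.total() == 3.5

The point is the order of events, not the code itself: because the assertion exists before the implementation, the implementation has nothing to do except satisfy it, and the test remains readable documentation of intent afterwards.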

Done after the fact, the tests have no way to influence the organization and expression of the code in any meaningful way. Writing tests after the code merely to increase coverage becomes drudgery.

Where to find it
Any quick Google search will turn up dozens of articles.

