Monday, November 3, 2008

Acceptance Test Qualities

I'm involved in writing a new agile guide with Jeff Langr. We are taking agile concepts and trying to boil them down to the simplest forms that cover the bases reasonably well.

It is rather like playing The Three Things (AKA "the Two Things") game for Agile software development. An example:

Acceptance Tests

  • Define “done done” for stories
  • Must be automated
  • Document all uses of the system
  • Should be usable as the basis for system documentation
  • Do not replace exploratory tests
  • Run in as-close-as-possible-to-production environment


This list is intended as a small set of reminders, so that when one is in the midst of a project, one might find some guidance. Is the test really fit for use as documentation, or is it written in programmer-ese? Is it describing the feature well enough to guide development? Is the Continuous Integration environment running it in a naive or unusual system configuration? Should we run these tests manually?

The bullet list should speak to you. If not, then read through the explanation below.

Define “done done” for stories

Clearly some of the greatest value in ATs is that they are executable specifications. No work should be taken on without some ATs first being written that describe the feature fairly fully. I tend not to require fully comprehensive coverage, but I find that sometimes I am wrong not to. This point is as important as it is difficult. We frequently find "missed requirements" or "unexpected interactions." The answer for these is probably not full Big Design Up-Front (BDUF) but a more agile way to deal with corrections and changes.
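To make "executable specification" concrete, here is a minimal sketch of one acceptance test defining "done done" for a hypothetical story, "a customer can apply a discount code." The names (`Cart`, `apply_discount`, `SAVE10`) are illustrative assumptions, not anything from a real project.

```python
# A hypothetical story: "a customer can apply a discount code".
# The acceptance test below states, executably, what "done" means.

class Cart:
    def __init__(self):
        self.total = 0.0

    def add_item(self, price):
        self.total += price

    def apply_discount(self, code):
        # Story rule: "SAVE10" takes 10% off the current total.
        if code == "SAVE10":
            self.total = round(self.total * 0.90, 2)

def test_discount_code_reduces_total():
    cart = Cart()
    cart.add_item(20.00)
    cart.apply_discount("SAVE10")
    assert cart.total == 18.00

test_discount_code_reduces_total()
```

Until this test passes, the story is not done; once it passes, it keeps stating the rule for as long as the suite runs.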

Must be automated

ATs really have to be automated. Manual testing simply cannot scale. We can expect to run every automated test we've ever written a few times a day, but could hardly expect to run all of the manual tests we could have written even once every two weeks. Automation doesn't just make testing convenient, it makes continual testing possible.
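The scale gap is easy to see with a back-of-the-envelope sketch; the test count and timings below are assumptions for illustration, not measurements:

```python
# Rough arithmetic on why manual testing cannot scale.
# All numbers here are illustrative assumptions.

test_count = 1000
manual_minutes_each = 3       # a tester working through a script
automated_seconds_each = 0.5  # a typical automated check

manual_hours_per_pass = test_count * manual_minutes_each / 60
automated_minutes_per_pass = test_count * automated_seconds_each / 60

print(manual_hours_per_pass)       # 50 hours: more than a week of work
print(automated_minutes_per_pass)  # about 8 minutes: runnable several times a day
```

At those rates a manual pass is a once-a-sprint event at best, while the automated suite can gate every integration.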

Document all uses of the system

Even uses of a system that predate the team's agile transition still need tests. This is because the second value of acceptance tests is in preventing regressions or detecting brokenness. It is never a good time to be ignorant of the fact that you've broken your system.

Should be usable as the basis for system documentation

The third value of the ATs is that they document the system. That should make it easier for people whose job is also to document the system. Often this power of testing is overlooked, especially when the tests are written in a non-literate style.

Do not replace exploratory tests

Of course, automated tests are never complete and features are prone to have unintended interactions or consequences. Professional testers are valuable teammates. Their exploratory testing may uncover things that programmers, intimate with the workings of their code, might not.

Run in as-close-as-possible-to-production environment

Finally, tests need to run on their target platform. Development environments often differ from production, though, and it's better to find any platform issues earlier in the process. If the tests include a database, it ought to be the same kind of database you'll see in production. Likewise file systems, network hardware & software, etc. It might be handy to have a CI system run the tests once on a development-like system and then install and run them again in a production-like environment.
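One small way to keep a CI run honest about its environment is to have the suite check that the production-like stage really matches production choices. The environment variable name and the expected engine below are assumptions for the sake of the sketch:

```python
# A sketch of guarding against a naive CI configuration: the second
# CI stage only counts as "production-like" if its choices match
# production's. DATABASE_ENGINE and "postgresql" are illustrative.

import os

EXPECTED_ENGINE = "postgresql"   # what production actually runs

def running_production_like():
    return os.environ.get("DATABASE_ENGINE") == EXPECTED_ENGINE

# Stage 1 might run against a development-like engine (say, sqlite);
# stage 2 installs the system and re-runs the same suite with the
# production-like stack configured:
os.environ["DATABASE_ENGINE"] = "postgresql"
assert running_production_like()
```

A check like this turns "we think CI matches production" into something the suite itself verifies on every run.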