Posts

Showing posts from November, 2008

Extreme Measures

- Shorten iterations to force priority
- SME can only help others complete tasks
- Require 40% of stories 100% done at midpoint
- Revert/discard work over three weeks old
- Random weekly team roster to force closure
- Stir pairs twice daily
- Eliminate individual tasks

Sometimes one has to take extreme measures to help a team over the hump in its agile transition. It is hard to adjust work habits without a work environment that depends on the new behaviors. These extreme measures may stick, or they may be training wheels for extreme programming.

Shorten Iterations

Shorten iterations to force priority. Make the Customer role pick fewer things to do, more often. This should also force developers to reach closure on cases more quickly. If the team is used to letting things lag and stack up for some future day, shortening the iteration can help them get into the habit of finishing things sooner and taking on less work.

SME Has No Tasks

SME can only help others complete tasks. This rule ...

Python Pimpl Pattern

A classic unit test blunder is to use the system time freely in your code. Another blunder is to monkey-patch your preferred time function. I was working with some ATs which failed because they were written with a particular date in mind, and the calendar has marched on since those days. The answer is fairly obvious: override the date function. With a little searching, I found a utility fixture for forcing a given date/time. It worked as long as I ran the test in isolation, but failed when I ran the test in its suite. Code in the system performed imports as "from mx.DateTime import now", so 'now' became a stable reference to whatever mx.DateTime.now happened to be. If you change the reference in mx.DateTime, it doesn't affect that stable reference; it binds at the time the mx.DateTime importer is loaded. Python also does some nice optimization: when you import a file, it doesn't necessarily read the file from disk. If the file is already loaded, it merely maps ...
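A minimal sketch of the binding problem described above, spread across three tiny files. The names clock.py and report.py are hypothetical stand-ins for mx.DateTime and the application code, not the actual project:

# clock.py -- hypothetical stand-in for mx.DateTime
import datetime

def now():
    return datetime.datetime.now()

# report.py -- "from clock import now" binds a local name at import time
from clock import now

def todays_report():
    return "report for %s" % now()

# test_report.py -- patching clock.now does not reach report's copy of the name
import datetime
import clock
import report

def fixed_now():
    return datetime.datetime(2008, 11, 1)

clock.now = fixed_now             # rebinds only the attribute in clock
print(report.todays_report())     # still uses the real system clock

report.now = fixed_now            # patch the importer's own reference instead
print(report.todays_report())     # now reports the frozen date

The safer habit is for application code to do "import clock" and call clock.now(), so a single monkey-patch of clock.now is visible everywhere.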

Acceptance Test Qualities

I'm involved in writing a new agile guide with Jeff Langr. We are taking agile concepts and trying to boil them down to the simplest forms that cover the bases reasonably well. It is rather like playing The Three Things (AKA "the Two Things") game for Agile software development. An example:

Acceptance Tests
- Define “done done” for stories
- Must be automated
- Document all uses of the system
- Should be usable as the basis for system documentation
- Do not replace exploratory tests
- Run in as-close-as-possible-to-production environment

This list is intended as a small set of reminders, so that when one is in the midst of a project, one might find some guidance. Is the test really fit for use as documentation, or is it written in programmer-ese? Is it describing the feature well enough to guide development? Is the Continuous Integration environment running it in a naive or unusual system configuration? Should we run these tests manually? The bullet list should speak to you. If n...

Agile Progress and Branching

This week, and last, we are doing our work in the release candidate (RC) branch, which will eventually be merged to trunk. We maintain a "stable trunk" system, with the RC as our codeline (for now). This is an intermediate step on our way to continuous integration. Partly because of the change in version control, the team has learned to rely more upon the tests, and is writing them quickly. We have had a noticeable increase in both unit tests (UTs) and automated user acceptance tests (UATs) in only one week. There were some problems with people checking in code for which some tests did not pass, but they have learned very quickly that this is quite unwelcome. We are painfully aware of the time it takes to run both test suites. The UTs suffer from a common testability problem: they were written to use the database, and they sometimes tend to be subsystem tests rather than true unit tests. When they are scoped down and mocking is applied, they should be much faster...
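A minimal sketch of the kind of scoping-down and mocking mentioned above; OrderService, FakeRepository, and the amounts are invented for illustration, not taken from our codebase:

import unittest

class OrderService:
    # depends on a repository object instead of reaching for the database directly
    def __init__(self, repository):
        self.repository = repository

    def overdue_total(self, customer_id):
        orders = self.repository.find_overdue(customer_id)
        return sum(order["amount"] for order in orders)

class FakeRepository:
    # in-memory stand-in for the database-backed repository
    def __init__(self, orders):
        self.orders = orders

    def find_overdue(self, customer_id):
        return self.orders

class OverdueTotalTest(unittest.TestCase):
    def test_sums_overdue_orders_without_the_database(self):
        service = OrderService(FakeRepository([{"amount": 10}, {"amount": 25}]))
        self.assertEqual(35, service.overdue_total("cust-42"))

if __name__ == "__main__":
    unittest.main()

A test like this runs in milliseconds because its only collaborator is an in-memory fake; the database-backed repository can get its own, much smaller suite of subsystem tests.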