Tuesday, May 15, 2012

A Process For Naming Tests

The excitement (aside from work and family travel) lately has been at Agile In A Flash, where we released a new blog post and card that reveal a process for naming tests.

After the naming papers I've written while at Object Mentor, and the chapter I supplied to Bob Martin's Clean Code and the subsequent video episode, I am known as a "naming guy."  I'm expected to always have a choice name in mind, in line with my own naming rules, for any circumstance in which I might find myself.

True to form, anyone pairing with me runs the risk of being exasperated at my constant two-step of "What's that for, really?" and "Can we rename it right now?"

My coworkers are often surprised when they see me use a silly or meaningless name early in a test or body of code. Why would I not know exactly what to name a variable, class, method, or test? How is a test fixture not obvious to me from the very beginning?

Roy Osherove, in initial shock at the idea of not proceeding name-first, kindly tweeted a link to our post.

When pressed, he told us that he felt that we should always understand the work we are going to do before we start it, so the names should be obvious. This was the primary disagreement we had with Roy. I appreciate the tweet and also the starkness of the contrast. Quite often we have a pretty good working title to begin with, and see no need to change it.

What we've provided is a process that doesn't require you to be sure to begin with.

I admit it: I'm not always sure where exactly I'm going next. I sometimes am not sure if the next test will be part of the same fixture or should have a different setup and teardown. Sometimes I know that the next test I write is trivial and will be followed by something more meaningful. Sometimes I'm looking at code and characterizing it, rather than fully understanding everything about it before I start.

Some of us might always have an answer in hand, but I'm not a guy who will sit on my hands until I see it. My feeling is that naming is far too important to settle on a final name before you explore the territory, and that there's a bit of parallel evolution going on.

I think it's important that the test and the code and all the variables and classes have the best names that my partner and I can give them or else our work is not done. I just don't confuse "can't finish without" and "can't start without."  I can start with silly names, and improve them.
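Here is a minimal sketch of that "start silly, finish well" rhythm in Python's unittest (the post contains no code; the Account class, the InsufficientFunds exception, and the test itself are invented purely for illustration):

```python
import unittest

# Hypothetical production code, just enough to have something to test.
class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount

class AccountTest(unittest.TestCase):
    # First pass, while still exploring, the name is a placeholder:
    #     def test_stuff(self):
    #         ...
    # Once the behavior is pinned down, the rename makes the intent plain:
    def test_withdraw_raises_when_amount_exceeds_balance(self):
        account = Account(balance=10)
        with self.assertRaises(InsufficientFunds):
            account.withdraw(25)
```

The silly name cost nothing to start with, and the final name cost nothing extra to arrive at, because it was chosen after the behavior was understood rather than before.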

This mirrors my feelings about design and architecture. You should have something in mind when you start, and you can't call your work finished until you have good architecture and design. That doesn't mean you must delay starting until you have a final answer. You can grow your answer incrementally even if you think you know what it's going to be.

Remember: evolution isn't the damage control done by failed species; it is how the winners keep winning. Given a choice between perfect planning and evolution, I'll bet on evolution every time.

Friday, May 11, 2012

TDD for C++ Programmers

It's official.

Your Agile Otter has joined forces with Jeff Langr again, and we will be producing a new Pragmatic Programmers project.  This time it's really a book, not a deck of cards, but we will try to maintain the same level of imagination and insight.

Thursday, May 3, 2012

Unfair advantages

In my agile training classes, I run an experiment wherein people try to estimate and deliver work in a very short time box. I typically run three iterations, so teams can learn from their experiences and experiment with different work styles. I do not give them guidance, but allow them to seek their own paths and then I tabulate the results.

To the team with the highest "score" at the end of the iteration, I ask what their competitive advantage is -- how they managed to get more done than their peers. Oddly enough, the answers tend to be the same no matter whether I'm teaching programmers, test engineers, or high-level executives.

Here is the summary:
  1. Teamwork.
    Pairing and tripling the people on a task means that more gets done sooner. Teams relying on individual work assignments consistently underperform compared with their teamwork-oriented peers, and often deliver nothing at all until they also adopt team-based work.
  2. Smaller Tasks.
    Winning teams only start work that can be completed in the time box. They find that they can achieve more points' worth of small tasks than they can of large tasks. Possibly this means merely that humans naturally underestimate large tasks, but the results seem to be pretty consistent so far. Doing less, faster seems to be one of the tricks that bring success.
  3. New Perspectives
    Teams that swap out partners tend to complete tasks quicker. They believe it is because any two people can go stale looking at the same problem for too long, while a new perspective can find solutions that the others hadn't considered.
  4. Superior Skill
    The tasks I give to simulate programming are puzzles. Sometimes a team has a member who is expert in solving a given kind of puzzle. The pair or triplet that contains an expert is quicker to complete its tasks than those that do not. Sometimes half the team's score comes from the expert and her partners.
This is not a scientific survey using skilled full-time developers over the course of weeks or months, and I'm willing to admit that the context of a simulation may not fully map to the real world of software development.

But what if it did? 

How much further could your team go if they were to seize the unfair competitive advantages of backlog grooming, pairing, swapping pair partners, and continuously developing their skills? What if we could do the same thing at other levels of the organization, above the scrum teams?