Monday, March 5, 2018

Predictability as Maturity or System?

The predictability of a team is subject to the predictability of the work.

The duration of a task depends primarily on three things (a rough sketch of how they interact follows this list):
  1. Raw effort -- fairly predictable, measurable, repeatable -- easy stuff.
  2. Risk -- the chance we might break something and incur rework or damage; VERY hard to predict before the work begins, and extremely hard to detect without significant effort in both automated testing AND exploratory testing. This can bring late-breaking delays that cannot be ignored.
  3. Uncertainty -- the amount and difficulty of the learning we will do, along with the chance that we may hit dead ends and have to start over. Cursedly hard to predict even to an order of magnitude.
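A quick way to feel the difference between these three is to simulate them. The sketch below is illustrative only -- the distributions and probabilities are made-up assumptions, not measurements from any real team -- but it shows how the risk and uncertainty terms, not the raw effort, create the long tail that wrecks estimates.

    import random
    import statistics

    def simulate_task_days(trials=10_000):
        """Monte Carlo sketch: duration = raw effort + risk + uncertainty.
        Every number here is an illustrative assumption, not a measurement."""
        durations = []
        for _ in range(trials):
            effort = random.gauss(5, 0.5)             # raw effort: narrow and predictable
            risk = random.choice([0] * 8 + [3, 8])    # ~20% chance of rework or damage
            uncertainty = 0.0
            while random.random() < 0.25:             # each dead end forces a restart
                uncertainty += random.uniform(2, 10)
            durations.append(effort + risk + uncertainty)
        return sorted(durations)

    days = simulate_task_days()
    print(f"median: {statistics.median(days):5.1f} days")
    print(f"90th %: {days[int(0.9 * len(days))]:5.1f} days")
    print(f"max   : {days[-1]:5.1f} days")

The effort term wanders by a day or so; the occasional rework hit and the dead-end loop are what push the 90th percentile out to a multiple of the median, and that is precisely the part an up-front estimate struggles to capture.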

Interesting and Uninteresting Work


Work that is mostly raw effort is uninteresting. Nothing is learned there, nothing is innovated, nothing taxes or stretches the workers. It's mostly just typing. It is the vast minority of software work, because developers have a tendency to automate any uninteresting work. Not only does it save them a lot of time and tedium, but automating that work becomes interesting.

Work that involves risk and uncertainty is interesting work. It requires active minds and research and study. Easily 90% of all software work is interesting work. 

When you ask developers what they want to do, they will almost always tell you that they want to do interesting work. They want to take on hard problems and work them to completion, preferably with a minimum of distraction since risk and uncertainty will require their full attention. 

Developers will also talk about safety to innovate and try things that might fail. Again, these are natural parts of doing interesting work.

Maturity?


Phrasing the ability of a team to estimate accurately as "maturity" is unhelpful. If anything, the relationship runs the other way: you give more mature and skillful teams more interesting work, and thereby less predictable work. It's the immature teams that churn out identically-sized bits of raw effort per week and who may be able to put in extra hours without degrading the quality of the result. After all, you give them the uninteresting work.

If the work has high uncertainty and high risk, the team could be composed of genius-dripping consummate professionals and the estimates still will not be consistent.

Consistency?


Organizations often expect teams to have consistent velocity and consistency in their SAY:DO ratio regarding estimates. That plays well with the idea of developers (coders, testers, ops, etc.) as unskilled laborers in a simple or possibly complicated system.

But the work is interesting, and some work is more interesting than other work. As such, velocities and estimates will differ among teams because different teams do different work. We should respect this as natural variation, not condemn it and try to eliminate it as special variation.
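The natural-versus-special distinction comes from statistical process control, and there is a simple tool for it: a process behavior (XmR) chart over velocity. The sketch below uses hypothetical velocity numbers; the 2.66 factor is the standard XmR constant for turning the average moving range into natural process limits.

    # XmR (individuals / moving-range) sketch over hypothetical sprint velocities.
    velocities = [21, 18, 25, 19, 23, 17, 22, 38, 20, 24]

    mean = sum(velocities) / len(velocities)
    moving_ranges = [abs(b - a) for a, b in zip(velocities, velocities[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)

    upper = mean + 2.66 * avg_mr   # natural process limits
    lower = mean - 2.66 * avg_mr

    print(f"natural limits: {lower:.1f} .. {upper:.1f}")
    for sprint, v in enumerate(velocities, start=1):
        verdict = "special cause?" if (v < lower or v > upper) else "natural variation"
        print(f"sprint {sprint:2d}: velocity {v:2d} -> {verdict}")

Note that even the sprint that "looks" wildly off can land inside the natural limits. Reacting to every point that merely looks unusual -- or holding one team's limits against another team's -- is treating natural variation as special variation, which is exactly the condemnation to avoid.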

But, We Need Accurate Estimates!


You probably do. It is possible, even likely, that you've built a system of work around the idea that software is primarily typing, that developers are unskilled laborers, and that work is generally simple and occasionally complicated, but that the only real risk and uncertainty are in relation to the calendar. 

Do you really need accurate estimates? Maybe you do. In that case, don't take on interesting work. Do only predictable things that have been done many times before, in a tried-and-true technology, for customers you understand very well. Don't complain about a lack of modernization, automation, innovation, or what-have-you. Keep it dull. You will probably lose some developers who like interesting work, but maybe you'll maintain a staff base of people who like trading n-hours-per-week for a paycheck, and those are the predictable ones anyway.

But maybe you don't really need accurate estimates. Maybe you can learn to do your work as an ongoing evolution of a software product using fixed staff and flexible scope. Maybe you can use story mapping and exception mapping and other story-splitting techniques to manage risk. Maybe you can have developers learn TDD and BDD and automated testing along with manual testing and coding. Maybe you can provide scheduled room for experimentation and dead-end mitigation.

Possibly you don't need estimates at all but haven't considered what a different system it would have to be in order to stop relying on estimates. That's a topic for another day, but maybe you could do a bit of research into other ways of working.

If you are going to do interesting work, though, you can't insist on accurate estimates. You'll have to tune your process to allow for risks other than date-and-content risks by adding slack, testing, and support.
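How much slack? One hedged, low-ceremony way to size it is from your own history: compute the ratio of actual to estimated duration for recently finished work, and plan commitments against a high percentile instead of the average. The numbers below are hypothetical.

    # Hypothetical actual/estimate ratios from recently finished work items.
    ratios = sorted([1.0, 1.1, 0.9, 1.4, 2.2, 1.2, 1.0, 3.1, 1.3, 1.6])

    average = sum(ratios) / len(ratios)
    p90 = ratios[int(0.9 * len(ratios)) - 1]   # crude 90th percentile

    estimate_days = 10   # the raw estimate for a new piece of work
    print(f"average overrun : {average:.2f}x -> plan {estimate_days * average:.0f} days")
    print(f"90th percentile : {p90:.2f}x -> plan {estimate_days * p90:.0f} days")

Planning against the percentile rather than the average is the slack; it absorbs the risk and uncertainty terms without pretending the individual estimates got any more accurate.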

Either way, you probably want to make sure you invest in refactoring, so simple things don't become risky and thereby unpredictable. 

Your code should always be as easily workable and easily fixable as possible. That still counts.

The Inevitable

If the work is unpredictable, as evidenced by our history of poor estimation, then perhaps the lesson is that estimate inaccuracy is not a matter of immaturity or insufficient effort.

If estimates have always been inaccurate, we have to accept that estimation error is (to us) inevitable.

If organizations all across the industry are also bemoaning poor estimates, then it's probably not just us.

So we treat it as normal and inevitable and keep looking for someone to come up with a better way.

It's how we've survived storms and market trends and other unpredictable elements for centuries; we accept it and allow for it.

If one or two "bad" estimates (i.e. estimates that turn out not to match actuals) will ruin a business plan, and bad estimates are inevitable, then the business plan is fragile. If we can't have better estimates, we'll need more robust plans.