Monday, March 11, 2019

Signal-to-Noise: The Workspace Application

I have seen people who think open spaces rock, and people who declare them “the worst possible layout.”

Some people like offices (find them “essential”) and others hate them.

I have met people who liked their cubicles, while most find them soulless and dehumanizing.

Most collocated teams like sitting in “pods” but some don’t.

I have a theory rather than a great design. My theory begins with “it depends.” It only becomes interesting when we get to “depends on what.”

So, there is information in every space. Visual, audible, olfactory, etc. 
  • Some of that information is on your screen when you’re working at the computer.
  • Some is posters, drawings, whiteboards.
  • Some is discussion happening nearby.
  • Some is via information radiators. 
If you take away the information, then it’s harder to be both focused and aware. Starved of information, we will be less productive and do less valuable work. Think of being the one remote person trying to keep up on the changes to a project and the culture of your org without email/slack/etc. Or the dev org without customer feedback. Information matters.

That’s half the picture. The other is noise. 

Noise is just information that isn’t relevant to the work you’re trying to do or the group in which you’re maintaining membership.

Signal:Noise. Yep, that old chestnut.

But here you go: if you’re in an open space, sitting among people who are not part of your work, who don’t share your goals and outcomes, then being there in all that noise is unsatisfying.

Likewise, if people are in groups, but you sit in a group that isn’t your group, S:N sucks.

Being in an office or cubicle isn’t so bad (and can be beloved) if you do solo work most of the time, and your tasks are pretty well isolated from that of all the other people in the space. But if you’re positively interdependent with other people, then walls and doors just get in the way. 

This suggests something obvious: form should follow function; as you work, so should you sit. 

There is one other thing: you need a place for your humanity too: pictures of the family and pets, photos from your favorite vacations, posters from concerts, mementos of good times. Because you’re a human. In an impersonal space, people can feel detached from their non-work lives. That kind of dis-integration is not healthy for most people.

Long post, I know.
But I believe these things.
And so I don’t tell anyone what the “right” arrangement is. 


Note: this is republished from an email conversation I had a while back.

Wednesday, December 19, 2018

Q and A on velocity, Part VIII

In our previous post, we discussed doing smaller units of work, and doing no more of them than we need to fulfill an end-user's needs. This way, we get many things done, we deliver more quickly, and we develop less unneeded code (provided we get frequent feedback from users).

Fans of agile software development will recognize this as one of the axiomatic concepts behind agile, derivative of much earlier work on incremental and iterative development.  Without incremental, iterative development and feedback, a process can hardly be said to be agile in any way whatsoever.

So we pick up the conversation today at that point:
A: ...But we get 23 points now, and they're as big as the 1/19th-sized ones.
B: That has two possible explanations.
A: What is the first explanation?
B: That they're assigning more points to same-sized work: inflation.
A: Why would they do that?
People in an organization are constantly trying to satisfy their bosses. When organizations put pressure on developers to produce more points, developers start to find ways to score more points. There are some hilarious examples in Joshua Kerievsky's article on story points at the Industrial Logic blog. Sometimes the inflation is intentional, but not always.

When a team turns in an 8-point sprint/iteration/whatever and is disappointed, they often look back and realize that they under-estimated some of the stories: the work was harder than it looked, and that is why it came to only 8 points. It should have been 15. In reaction, they give higher point estimates the next time. This kind of inflation isn't meant to deceive, but to better reflect reality. Now the team turns in a more satisfying 15-point sprint, and exactly the same amount of work is being done.
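The arithmetic of inflation is worth sketching. Every number below is made up for illustration, loosely mirroring the 8-point and 15-point sprints just described:

```python
# Illustrative sketch of point inflation. All numbers are hypothetical.

stories_per_sprint = 6     # actual throughput: unchanged sprint to sprint
hours_per_story = 20       # actual effort per story: unchanged

old_velocity = 8    # points scored for those 6 stories before re-estimating
new_velocity = 15   # points scored for the same 6 stories afterward

# "Velocity" nearly doubled, but the real work per sprint did not:
real_work = stories_per_sprint * hours_per_story   # 120 hours either way
inflation = new_velocity / old_velocity            # 1.875x points per story
print(real_work, inflation)
```

Nothing in the numerator of real work moved; only the unit of measure shrank.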

That people misinterpret this as acceleration is not the team's intention. They are merely taking credit for the amount of work they actually did.

"Scoring points" and "having higher scores" classically illustrate Goodhart's law: a reasonable measure (here, of completion rate), once chosen as a goal, ceases to be a valid measure.

You will hear development teams in scrum-like processes talking about work "blowing up."  That means "like a balloon," not "like dynamite."  There was some story that seemed like a 2-pointer, but turned out to take most of the sprint because there was more risk and uncertainty than anyone realized.

A story may have seemed simple: "add a discount code field." The product owner/manager/boss may have said: "that should only be a two-point story."

It seems easy. It should be easy to operate. What could be simpler? It's one field on a form, and you subtract some value from the total.

But these things are seldom simple. Marketing and accounting want tracking, and expiry for discounts, and demographic limits on the usage. Discounts need to appear on the customer's receipt as an indicator that they got a good deal from the company (marketing!). Discounts need to be represented in billing reports.

There may be special discounting reports used to measure the success of promotions. There may be anti-fraud ramifications.

Then someone decides that there should be many discount codes, and they should combine in some ways but not others. Now UX has to redesign the data entry screen, and there are complicated rules. Maybe those need a new microservice?

But it was a two-point story, right? This raises questions:
  • Why did the team only turn in 5 points this sprint? 
  • What's WRONG with them? 
  • Why are they slowing down? 
  • What can we do to motivate them to go faster?
  • Team X turned in 50 points compared to this team's 5, why are they 10x better? 
  • Does this team need a PIP?

These are the wrong questions.  They are based on a faulty assumption, and acting on them will only make things worse. We discussed curiosity spaces related to velocity in Part VI, which you may want to revisit at this time.

Ultimately, the thing to understand about teams using time boxes is that about the same amount of work is being done every week (barring absences and meetings and events on company time). Velocity is mostly illusion.

Consider Goodhart's law, curiosity, exploding stories, and inflation.

Then come back to join us for part IX.

Monday, November 5, 2018

Q and A on Velocity, part VII

In Part VI we discussed faux bottlenecks and blaming. These are often the result of having expectations that cannot be met, a phenomenon that was discussed in the last installment.

Let's pick up the conversation from that point:

A: I want a velocity of 30. Quit distracting me and tell me how to get that.
B: You know that a 19-point sprint means that one point is 1/19th of a sprint?
A: This I remember very well.
B: If you want features to be 1/30th of a sprint, make them 1/30th of a sprint in size and scope.

Our person A is becoming increasingly impatient. A has asked what seems to be a very simple question (how to raise the velocity), yet person B seems to be deflecting by discussing systems and human dynamics, methods of estimation, and the futility of improving speed by changing estimation units.

Rather than helping A make the plan and schedule successful, B seems to be hand-waving and declaring the plans and schedules unimportant and unreal. In short, B is not being helpful at all (as far as A can see).

On the other hand, B is quite aware that A has been asking the wrong questions. A is very much focused on the utilization of workers, on causing reality to fit early expectations, and on the disconnection of the workers from the need to fulfill promises. The right questions are just now starting to emerge.

B suggests to A that every week, pretty much the same amount of work is going to be done. If you want to count more things as done, you need to do many small things. If you want larger accomplishments, you must have fewer of them. The area of the rectangle stays roughly the same whether it is 1x6 or 2x3.
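B's rectangle can be put into toy numbers. The capacity figure here is an assumption for illustration only:

```python
# The "productivity rectangle": size-of-work x count-of-work is roughly
# constant per sprint. A capacity of 6 is a hypothetical figure.
capacity = 6

for size, count in [(1, 6), (2, 3), (3, 2), (6, 1)]:
    assert size * count == capacity   # the area never changes

# A velocity of 30 from this team means slicing the same capacity into
# 30 pieces, each 1/5th the size of today's unit -- not doing 5x the work.
pieces = 30
size_per_piece = capacity / pieces
print(size_per_piece)   # 0.2
```

The knob we control is how finely we slice, not how much total area there is.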

In part II, B wanted to talk about ways of increasing the area of the productivity rectangle by making it easier to accomplish the work. This, however, was deflected by A in Part IV, as A wanted to increase the amount of effort supplied rather than decrease the amount required.

A is still looking for a way of cranking up the intensity for immediate impact, while B is still discussing working with greater consistency for sustainably better impact.

As such, B suggests working in Walking Skeletons, AKA Thin Vertical Slices. This provides a way of being done more often, on more things, and giving the managers the power to decide when a feature is "done enough."


Here is the agile way of working, in tiny pieces that are individually shippable (if such a thing is desired) and which will, together, ultimately be useful to the end-users.

Any feature might reach the 20% Pareto balance point -- the 20% that provides 80% of the value of the feature, at which time it is no longer that important to continue development. It may be that some other feature has become more important to work on, so that the last 80% may never be done.
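A sketch of that balance point, with hypothetical per-slice values and slices built highest-value-first:

```python
# Hypothetical value of each thin slice of a feature, built
# highest-value-first. The first few slices carry most of the value.
slice_values = [40, 25, 15, 8, 5, 3, 2, 1, 0.5, 0.5]
total = sum(slice_values)   # 100

built, value = 0, 0.0
for v in slice_values:
    built += 1
    value += v
    if value / total >= 0.8:   # "done enough": 80% of the value
        break

# Three of ten slices deliver 80% of the value. The remaining seven can
# yield to a more important feature, and may never be built at all.
print(built, value)   # 3 80.0
```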

This is what is meant by the agile manifesto principle "Simplicity -- the art of maximizing the amount of work not done -- is essential."  By doing less of each feature, we get more features 'done'. By doing less, we accomplish more.  By taking on smaller units of work, we complete more units of work.

The assumption that things will be done in entirety because they were visualized and designed in entirety? A non-agile idea. Likely there is a very satisfactory 'less' that could be provided. Not only is that true, but also having a satisfactory 'less' may give the users clarity about what the feature should really entail -- a clarity that they did not have at the fat end of the cone of uncertainty.

Software tends to work in wicked problems, where providing a solution to a problem changes the nature of the problem and problems bleed over into each other. As such, getting fast feedback helps steer products in a way that working hard does not.
A: They would be very tiny features.
B: Yes.
Here we see a light beginning to dawn, we hope. But A recognizes that B is not increasing the rate at which the team burns through the backlog; B is again trying to cope with the rate at which things are being done. A rejects this deflection once again.
A: But I want to do big things.
B: Then you will get fewer of them done for a given period.
A: Why can't I get more big things done per iteration?
B: How many gallons fit in a 5-gallon bucket?
Join A, B, and me for Part VIII.



Monday, October 29, 2018

Q and A on Velocity, Part VI


In Part V, we examined the relationship between working harder and going faster. I hope the message you came away with was that we want people to work less hard and fewer hours, and deliver more, though the conditions have to be right if we want work to be more productive for the time invested.

Let's pick up there:

A: So how can we actually deliver functionality faster?
B: Ah, now that is a quality question. How long does it take to deliver a feature now?
A: Too long. We need developers to speed up.
B: What % of lead time is represented by developers' cycle time?
A: Why are you asking about lead time? I'm talking about development.
B: Lead time is a measurement of delivery. Development cycle time is only one element of lead time.
This part of the conversation illustrates a lack of curiosity about processes. Person A has clearly decided that development is the bottleneck, and isn't really interested in knowing how the rest of the process may be restricting or preventing delivery.
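To make B's question concrete, here is a hypothetical lead-time breakdown for a single feature. Every figure below is invented for illustration:

```python
# Lead time runs from request to delivery; development cycle time is
# only one slice of it. All durations are hypothetical working days.
stages = {
    "waiting in backlog":        30,
    "analysis and approval":      5,
    "development (cycle time)":   4,
    "waiting for QA":             7,
    "QA and rework":              6,
    "waiting for release":       20,
}
lead_time = sum(stages.values())                       # 72 days
dev_share = stages["development (cycle time)"] / lead_time

# Development is under 6% of lead time here. Halving developer cycle
# time would save 2 days out of 72; the queues dominate delivery.
print(lead_time, round(dev_share * 100, 1))   # 72 5.6
```

Person A, focused only on the development slice, never sees the other 68 days.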

This is not uncommon.

Quite often the problem of long lead times will be attributed to development, or where post-development testing is practiced, to QA.

A friend of mine was involved in a large time-critical development project as a testing professional. He was called up in front of the directors of the company for "delaying the release."  The director of software development (separate from QA) argued that the QA department was not "passing the tests fast enough."

It was said un-ironically. The situation was that the QA department was finding an exceptional number of defects (partly, perhaps, because the development director had decreed no design, no refactoring, and no testing during development in order to "speed things up"). The QA department was returning the code for rework, which was slowing down new development.

The problem? Well, the queue and the inventory appear at QA, so the QA people must be at fault for not moving the inventory through more quickly, right? Certainly not the developers who had faithfully produced mounds of defective code as fast as possible for months.

In other organizations, we have found development blamed for features that are stockpiled by marketing, who wait until enough work has accumulated to make a great release-party advertisement. While value could have been released to customers, the idea of "making a splash" was too much a part of the culture of the organization. Development typically took a few days per story -- quite quick, really. But those features sat in the queue for months. As a result, some people in the organization had decided that development was "taking months to write the code." In this case, an architect refused to hear the statistics and measures. "I disagree," said the architect who was presented with the stats, "the developers have to go faster."

Why is this? Because development and QA are where the rubber meets the road. In part III we stated that how long it actually takes is real; estimates are made up. When perfectly good (seeming) plans aren't met by real work, most of us tend to meet the disappointment by placing blame. The people pinched between the real work and the intentions are the easiest to blame. 

Where we should see a curiosity space, we tend to see a "problem."


If we see the difference as a problem, rather than a curiosity space, we tend to see the people who are at the bottleneck as being the bottleneck. This error should be obvious to anyone with a modicum of systems thinking, one would think, but most people under pressure are more prone to emotional reaction than to careful systems thinking.

Let's not blame people for not seeing what we hope would be obvious. To pillory people for their "ignorance" is seldom helpful. It's just returning blame for blame. How does that help?

Instead, if we still our impulses we can approach the problem with a heart that is at peace, and seeking solutions rather than confirming suspicion and blame. Curiosity (and our sense-making apparatus) might have a better result here. 

A: I don't want to deal with lead time. Can we just focus on developer cycle time?
B: Of course, but you may not be working at the bottleneck so it may be wasted.
A: I only want to talk about developers.
B: Okay. As long as you are aware it may be pointless.

And here is the deep truth of this installment, which echoes so much of the Theory Of Constraints:
If we are not working at the bottleneck,
our efforts to improve flow are pointless.

This means that we may have to move beyond our unchallenged assumptions. There are a series of statements like "it's the devs", "it's QA", "you all aren't working hard enough", "you aren't taking this seriously" -- all assuming that problems are motivational rather than technical.

If one is faced with such a stonewall, what does one do? Let's pick that up in part VII.

Tuesday, October 23, 2018

Q and A on Velocity, part V


In Part IV, we talked about the thorny clashes between the reality of promises, and the reality of development. This installment picks up on some truths about working harder and going faster.

A: Wait a second... You never actually said we can't go faster. You only said that trying harder and adding people weren't the way.
B: That is true.
A: So we could possibly go faster?
B: Certainly.

Frankly, we don't know how fast developers might be able to go. I've had friends call and tell me about taking on estimated work and finishing it in a morning because the code was well-factored, readable, and well-tested. One friend said that "all the functions I needed were already written and easy to find."

Robert Martin always said that the speed of today's development work mainly depends on the quality of the code you will be working on. Low-quality code? Low speed. High-quality code? High speed.

In addition, we've asked around and found that developers spend about 25% of their time in meetings (the more senior spend more time). That's 25% right off the top.

Of the time remaining, most developers spend 60%-80% of their time fixing defects. That's a heck of a lot of time. If they went half as fast but produced no bugs, they would still be at least 10% faster overall.

Of the time that's left, silos and approval processes and other interruptions force developers to spend a bit of their time waiting for other people. Work sits in queues.

And of course, when a programmer's work has to wait in a queue, the conscientious programmer keeps busy by pulling an extra task off the queue.

Now when work is returned from testing, or from an approver, or from some kind of review, it has to sit in a queue waiting for the developer to be available again. The average corporate programmer we asked has about 5 concurrent tasks (or branches) open. It can be as high as 15.

You can see the time is whittled away to near-nothingness. It's no surprise that a piece of work takes forever to develop, even though the developer is busy and has a lot of tasks in progress at any given time.
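Putting the whittling into rough numbers, using the figures above as assumptions (25% in meetings, roughly 70% of the remaining time on defects, about five open tasks):

```python
# Rough arithmetic of the whittling-away. All figures are the
# assumptions quoted above, not measurements.
week = 40                             # hours in a work week
after_meetings = week * 3 // 4        # 30 hours left after meetings
building = after_meetings * 3 // 10   # 9 hours actually building new things

# Split across ~5 concurrent tasks, any one piece of work advances
# about 1.8 hours per week. No wonder it seems to take forever.
per_task = building / 5

# Compare: zero defects at half the pace would be 30 / 2 = 15 effective
# hours of new work, noticeably more than 9.
half_pace_no_bugs = after_meetings / 2
print(per_task, half_pace_no_bugs)   # 1.8 15.0
```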

Developers say "I'm not blocked, I have plenty of other tasks to work on," and it's true. But the work itself is blocked. The work is not going to production soon enough because the developers are busy doing something else. This is why Lean advocates say "watch the baton, not the runner."  Flow efficiency is about the throughput of a system, not about the busyness of the workers.

It's a messy system, made messier by processes that include a lot of silos, individual effort, quick plate-emptying, approval cycles, branches, and what-not.

Most of our systems seem to do more software
prevention than software development.

But still, the quality of the code is the big determinant. If the code is well-organized, and good tooling is used, and the developer is familiar with the system, and the change doesn't introduce any major architectural rearrangements, ... well, how amazingly fast might it be?

Faster should be easily possible, but getting there may be hard human work, and not just for developers.

On the topic of hard human work:

A: So how can we get the developers to work harder?
B: Take away their tools. Add more bureaucracy. Use slow computers and dead programming languages.
A: And that will bring up the velocity?
B: No. You asked how to make their work harder.
This highlights a problem in the approach to getting more done. We hinted at this in Part II, and in 2009's Platitudes of Doom article we went on about it at length. There is an assumption that the reason work is slow is that people aren't working hard enough.

The focus on busyness and effort is not helpful.

The above conversation is a bit tongue-in-cheek but illustrates the point that the problem isn't in busyness and effort expended, but in results being attained.   Most programmers are working far harder than they should have to and creating far less value in doing so.

If we insist on putting more effort into the work, we end up getting more brute-force, and not the best results we hope for. This will be discussed more whenever I get into writing up The Journey.
A: We want them to work harder so that we get more done.
B: If you want to accomplish more, shouldn't you make the work less hard?
A: We just want it done.
B: Of course, and so do they.
Here is the interesting magic. If we want people to get more done, we are more likely to succeed by removing the waste and uncertainty and risk from the process than by putting greater effort and strain into it.

Again, in part II we talked about the formula of effort provided to do a task over effort required. Scroll down to the graphic that illustrates this point and the text that surrounds it.

This snippet of the discussion is really just restating points you've already seen. But there is some chance that you are not a long-time reader of this humble blog, out in the wastelands of blogger's dark corner, so maybe it will help to give you some more reading on the topic. To wit:


Maybe the programming isn't slow, considering the condition of the code base and the presence of risk and uncertainty in the requirements, the difficulty of predicting intellectual effort, etc.

Maybe the estimates are low, rather than the work being slow?
We don't really know. 

We only know that there exists a difference between what we expect (and want) and what we are really achieving. We can look at that as a betrayal, or we can look at it as a curiosity space.

I know which of those choices leads to blame and frustration, and which leads to alignment and improvement. 

But I'll let you ponder that until Part VI.

Monday, October 15, 2018

Q and A on Velocity, Part IV

In our last installment, part III, we talked about the reality of reality and contrasted that to the made-up-ness of schedules and estimates and promises.

Today we take a deeper look into one element of that dichotomy:

A: The schedule is a best-guess, but there is a real promise behind it.
B: How does that affect the rate at which things can be done?
A: Can I put more people on it?
B: Maybe, but cf. Brooks' Law

There is a reality to the made-up-ness of schedules, too. In this exchange, person A is reminding person B of the fact.

Promises are real. Trust is real. Keeping promises creates, sustains, and promotes trust. Real-life organizations run on trust. Plenty of business leaders (and one agile aquatic mammal) have spoken and written on this at length. One of my favorite authors on the subject is Stephen M. R. Covey (AKA "Covey the Lesser") with his book The Speed of Trust. Also see economist Ronald Coase's work, where he explains the lower transaction costs that come with trust.

And yet, for all of this, reality doesn't do what we want it to do. Things still take as long as they take.

Rock, meet hard place.

If we were installing drywall or carpet, then the work would be partly skill and mostly effort. We could get more drywallers or more carpet-layers and do more rooms at the same time. We might even be able to do one room at a time faster, depending on how well they coordinate.

But no. This is different. Programming is a chain of logic that is fed by and feeds into many other chains of logic. The parts are not as independent as we think, and not understanding the larger context (and the flaws of the parts within that context) will lead to many defects.

I once switched a bit of code from a bad way of doing work (building JSON strings by concatenation) to a more proper use of a JSON library, and took a system down for a little while. Sometimes the system depends on the work having been done the way it was -- the wrong way -- in order to function. Things stick together in a weird way.
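A minimal illustration (not the actual incident) of why concatenation is the "bad way": the moment the data contains a quote, hand-built JSON breaks, while a library escapes it correctly. The sample data is hypothetical.

```python
import json

name = 'Conway "Mel" Smith'   # hypothetical data containing quotes

by_concat = '{"name": "' + name + '"}'   # hand-built: nothing is escaped
proper = json.dumps({"name": name})      # library: quotes escaped correctly

try:
    json.loads(by_concat)
    concat_valid = True
except json.JSONDecodeError:
    concat_valid = False

assert not concat_valid                    # the concatenated string is broken
assert json.loads(proper)["name"] == name  # the library round-trips cleanly
```

Of course, in the incident above the surrounding system had come to depend on the malformed output, which is exactly why fixing the "wrong way" took it down.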

Bringing in developers who don't know the way that the code works doesn't help. Mob programming or pair programming with people who know more about the different parts of the code certainly does help. Adding knowledgeable people to the effort does lead to a better result, but doesn't necessarily make the work go faster.

Brooks' Law is the observation that adding people to a late project makes it later. This has been studied and documented at length, and anyone wanting to try to accelerate a project by adding people should beware. It's not impossible to add people to a project, but often a flood of new workers who don't know the system will make it FAR worse.

These are inconvenient truths, which press us to look into more promising alternatives if we want to see work delivered to an all-too-real schedule.

Follow us to Part V

Friday, July 27, 2018

Q and A on Velocity, part III


In Part II, we talked about velocity and the (glossed-over) line about making it easier to get work done instead of pushing harder.  A was asking if changing the units used in estimating would help and, of course, it doesn't.

We pick up from there this week with a tiny snippet of conversation that touches on some big ideas:

A: Then how can I get my 30-point velocity?
B: What if there isn't a way? Maybe you need to need less?
A: But the schedule....!
B: The schedule is made up. How long it takes is real.

Software estimates are often wrong.

Quite often they're off by over 200% on specific items. There is a very good reason for that: we aren't given uniform, standardized work items or a standardized process. Nor can we be. If the feature the client wants has been written already, we don't write it again.

Software developers don't repeat themselves. Each problem and each solution is (a little bit) unique.

As such, each estimate includes not only effort but also a risk (of breaking something) and uncertainty (need to learn something).

  • How long will it take you to learn how to do something you've not done before? 
  • What are the chances that adding this new feature will damage performance, suck memory, or complicate deployment? 
  • Have we discovered every "touch point" or is there some screen or report or calculation that needs to be modified because of the new data or procedure introduced by this change?
  • What is the likelihood of some other unintentional or unforeseen consequences, perhaps due to different configuration options? 
  • Can we easily understand the code that we will have to be changing, or will it take considerable effort to unravel the original intent and comprehend all the side effects? 
The risk element piles up as a system grows. All new functionality, and especially functionality 'shoehorned' into place (quick-and-dirty hacks "to get it done fast"), increases the chance that something bad may happen in some dark corner of the code. Projects slow down over time because of this.

When systems grow, and there is insufficient automated testing, then the ability of a developer to recognize that a new change has caused breakage is limited. 

Many software shops do a poor job of mitigating technical risks, and when breakages inevitably occur, they often blame "not having good enough programmers."

Sometimes more stories are done than expected because the risks didn't materialize and the learning was easier than anticipated.

Other times, unexpected risks materialize and stories blow up to several times their original estimate. But the same amount of work is being done every week.
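A small deterministic sketch of that pattern, with invented numbers: ten stories all estimated at two points, one of which blows up.

```python
# Ten stories, all estimated at 2 points; hypothetical actual costs.
estimates = [2] * 10
actuals   = [1, 2, 2, 10, 2, 4, 2, 2, 1, 2]   # one story blew up to 5x

# The worst single story is off by 400%...
worst_ratio = max(a / e for a, e in zip(actuals, estimates))   # 5.0
error_pct = (worst_ratio - 1) * 100                            # 400.0

# ...but the batch as a whole is far tamer than its worst member:
batch_ratio = sum(actuals) / sum(estimates)   # 28 / 20 = 1.4
print(error_pct, batch_ratio)   # 400.0 1.4
```

Individual estimates scatter wildly; the week's total work stays comparatively steady.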

Estimates are made up. They represent someone's best guess about the amount of learning, the amount of typing, and the amount of risk a story entails. Still, they can be off by 250% or more.

Schedules tend to be based on estimates, although quite often this is done backward so that the estimates are based on the schedule.  

The problem statement "we need 30 points per sprint" suggests that the original schedule and estimate were made based on a desire to have a particular thing finished by a particular time, believing that a high rate of accomplishment would be possible.

Here, where the rubber meets the road, the actual work is not happening as fast as the original scheduler hoped it would. This leaves the manager in a tough spot, since the actual rate of completion is about two-thirds of the anticipated/hoped rate.

There is a very real risk that a full 1/3 of the project cannot be completed on time. 

A smart software manager understands this, and so prioritizes so that the most important features are built early on and delivered soon. That way, when work has to be descoped, it is the least important work being dropped. 
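That prioritization can be sketched as follows. The feature names and value scores are invented for illustration:

```python
# Hypothetical backlog: (story, relative value). Build highest value
# first, so that when a third of the schedule evaporates, only the
# least valuable work is dropped.
backlog = [("login", 90), ("checkout", 80), ("reports", 30),
           ("themes", 10), ("search", 60), ("export", 20)]

by_value = sorted(backlog, key=lambda s: s[1], reverse=True)

capacity = 4                        # only 4 of the 6 stories will fit
shipped = by_value[:capacity]
dropped = by_value[capacity:]

shipped_value = sum(v for _, v in shipped)   # 260 of 290 total value
print(shipped_value, [name for name, _ in dropped])
```

Losing a third of the stories costs only about a tenth of the value, because the cut falls on the cheapest end.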

This means not only sacrificing some stories but possibly the less-interesting parts of every story (cf Story Mapping and Example Mapping, Walking Skeletons, Evolutionary Design, etc). 

A smart team will find -- through feedback -- which parts of the project should not be built at all. Often people over-specify and over-design. Sometimes the presence of a simple solution reduces the need for a comprehensive one.  This is built into the agile manifesto (called "simplicity", defined as "maximizing the amount of work not done") and is crucial to all agile methods.

If we can need less, we can deliver sooner.
If we can prioritize well, we can sacrifice lower-value work.

Feedback from customers improves our prioritization and our ability to descope intelligently but requires us to release code to them early and often.

If developers are pushed to provide brute-force, quick-n-dirty solutions then it increases the cost of change due to more difficulty researching changes and more difficulty spotting breakages.

With enough risk pushed into the code, it may become intractable. The team may have to stop producing new features to clean it up. Nobody wants that. It is faster to write clean code with good automated tests to begin with than to end up spending 80% of developers' time cleaning up messes and defects.

Which again brings us to the hard truth in this installment:

How long it actually takes is real. Estimates are made up. 

Blaming reality for not matching fantasy isn't useful, and blaming people for making up unrealistic schedules is not helpful. We are where we are. We have to move forward.

It leaves us with a bit of optimistic pessimism: if this work cannot be fully completed, what is the best possible release we can make given the resources, time, and scope at hand? 

See you back here for Part IV.