Thursday, June 14, 2018

Owner, Renter, Guest, Vandal?

What is your role in the code base you find yourself in today? 
I ask seriously because I'm learning that the answers matter. 
A lot.
An owner makes property improvements, upgrades, installations, replacements.  It belongs to them, and its value is important. An owner can paint, can put in a staircase, can replace the plumbing.
A renter tries to not make things worse and can make superficial improvements. Maybe they can paint or change out some things, but only with permission from the owner. Anything that is really wrong, that's on the owner.
A renter doesn't repave the drive, put in a deck, fix the drainage of the basement. It's not really theirs.
A guest is just there for the day; not really permitted to make changes at all. They can use the facilities that exist, but that's all. Could you imagine, "Hello people of Pisa. I just noticed your tower is crooked, so I fixed it."  Guests are just passing through.
A vandal doesn't care about the quality and state of the property at all. They are just there to put their personal mark on it and they hope to avoid the consequences of doing so. 


How do you treat the code you're in? Can you make the changes you want to make, so that it becomes more valuable? Can you make any substantive changes without permission? Do you just get the effect you want and get out before you're caught?
Is it a place you live, or a thing you wander into and out of sometimes?
There is another possibility: contractor.
If you're playing the part of a contractor then you're there to make a specific change on behalf of the owner. 
The property isn't yours, and you're not invested in its value. You may take pride in your work and reputation and therefore refuse to do slapdash work. A great contractor acts in the owner's best interest.
On the other hand, you might be the budget option, the lowest bidder. There is a pretty small difference between a budget (slapdash) contractor and a vandal. 
Which are you? 
Which do you want to be? 
Which are you expected to be?
Does a misalignment between the expectations you have and the expectations others have of your role create problems?

Wednesday, June 13, 2018

Piecing it together without tests...

I wrote a thing yesterday… It was in Python, and I was using PyCharm (which is very nice, and a departure from using vim + sniffer).

It was something I didn’t know how to do initially, and couldn’t really figure out how to test drive.  I did it without pair- or mob-programming.

It was essentially a screen-scraping tool to download the latest version of a zip file... that's not really interesting.

I had to learn or relearn some libraries I’d forgotten about, using docs and StackOverflow, and the “inspect element” feature of Chrome quite heavily.  I whinged on Twitter about the lack of ids and names on elements in web pages, but eventually got it worked out.

Because I wasn't sure how to make it work to begin with, I used the IPython REPL (via the 'Python Terminal' window in PyCharm). I would try various ways of locating the anchor link I wanted until I found one that I thought was far less fragile than the others (such as some rather dubious XPath). I basically pasted the lines that worked into the source file.

I ended up using requests, BeautifulSoup, and urlparse to do the whole job. It was really simple once I figured it out.
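For the curious, here's the shape of it as a stdlib-only sketch (the actual tool used requests and BeautifulSoup against the live page; the page snippet, names, and URLs here are invented for illustration):

```python
# Stdlib-only sketch of the scraper's core: find the anchor that points
# at a .zip file and resolve it against the page URL. (The real tool
# used requests + BeautifulSoup; this snippet and its URLs are made up.)
from html.parser import HTMLParser
from urllib.parse import urljoin

class ZipLinkFinder(HTMLParser):
    """Collects the hrefs of anchors that point at .zip files."""
    def __init__(self):
        super().__init__()
        self.zip_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.endswith(".zip"):
                self.zip_links.append(href)

def latest_zip_url(page_url, page_html):
    """Return the first .zip link, resolved to an absolute URL, or None."""
    finder = ZipLinkFinder()
    finder.feed(page_html)
    if not finder.zip_links:
        return None
    return urljoin(page_url, finder.zip_links[0])

page = '<html><body><a href="downloads/tool-1.2.zip">latest</a></body></html>'
print(latest_zip_url("https://example.com/files/", page))
# https://example.com/files/downloads/tool-1.2.zip
```

Downloading the file afterward is the boring part; locating the link robustly was where all the learning happened.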

When I was done, I had something pretty ugly and linear.  I refactored it into composed functions and re-ran it to test it.

At that point I realized that I’d not really written any tests for it -- I was building it in the REPL mostly, and then pasting lines to the source file.  It dawned on me that a lot of code is written this way -- tested against the actual servers (not mocks or local data), pieced together from documents and code snippets and trial-and-error, tested only in the sense of being test-run by hand, and done entirely alone.

Since I didn't have the disciplinary reinforcement of a programming partner, it was easy to fall into making something that wasn't driven through automated tests, just through "try and see."

As my Industrial Logic coaching partner Mike Rieser would say, I was "programming for effect only."

I want to be fair about this being a little throwaway utility. It will be easy to discard and rewrite or to fix if the page changes in an unfriendly way. It has composed functions. It's tiny -- less than 50 lines including vertical whitespace. Easily 90% of the work was just learning how.

But I didn't test drive it. I didn't use partners. I could easily have done both.

I'm realizing that Lewin's equation was right. My behavior is a function of both my personality and my environment, and by taking myself out of my normal environment my behavior was not at the same professional level as usual.

Something to think about. 

Tuesday, June 12, 2018


Those People

When you hear "those people are always complaining" then you're looking at a systemic problem. It's probably not a personality type; it's probably that the system in play disadvantages them constantly. It's easier to do victim-blaming, but maybe it's time to investigate.

The Fundamental Attribution Error is too easy to fall into, especially when it allows you to write off a whole group of people you don't know how to please.
This isn't (only) a social justice thing; it's an organizational thing. "Testers are all whiners," "the programmers are always grumpy," "middle managers are so brusque and rude," "ops people are jerks."
Which group "always complains" where you work... and have you ever looked at the systemic reasons for that?

I've heard leaders say "QA people are all whiners" when their way of working seemed to always bring testing professionals in at the end of a (late) project and then pressure them to rush through the testing so that the product can be released (almost) on time. The poor people have to learn an entirely new system, including all the configuration options, develop rich varieties of data scenarios, and put the system through its paces on outrageously short schedules and with no budget to try alternative tools and techniques which might make the work easier. "All whiners" because the company put them in a difficult situation time after time. 

Likewise, I've heard it of developers who "just want to take a break" to clean up a code base that has become progressively more crufty, convoluted, and messy until it significantly impedes their progress, and they accidentally inject unforeseen errors until over 70% of their time goes to reproducing and resolving defects. They are told that they are lazy because they are not producing features as fast as before, and when they ask for some help to make the code less awful they get labeled "complainers."

I've heard it of executives. Of team managers. Of tech leads. Of middle management. Of product managers. Of sales. 

I'm especially concerned when the "whiners" are customer service and operations. Something is wrong when the people most directly involved in the users' lives are the ones in the pinch. 

Look for "the pinch."

He Is Just A Complainer

Some people do complain more than others.

And which individuals? Are they really getting an unfair deal? Are they speaking for their peers? What's really going on here? What if it's not "just personality" or "just that kind of person"?

Some people refuse to talk about problems. Sometimes they get used to the problem, and it's easier to just be quiet and endure it than to bring it up and be accused of being a "complainer type." 

Often it is people who don't take the job seriously who won't complain about it. A friend of mine used to have a part-time evening job. He just repeated the words "paid by the hour" over and over in his head as he did the most unpleasant and menial parts of that job. 

There is the class of people who are afraid to converse with their bosses and other people in power. They're trying to preserve their position, and the quality and speed of the work doesn't matter. They're not going to be the person who does the complaining; it would attract unwanted attention. Better to suffer in silence.

There is also a group of people who have learned that negativity doesn't help as much as positive influence, so they try to improve all their work with a positive attitude and small, local actions. They build alliances, they make suggestions. But they don't complain. These people are gold, but sometimes they're going to gloss over system problems that won't be solved with local changes and positive attitude. Big problems are above their pay grade.

Again, it's easy to fall into the Fundamental Attribution Error. It's easy to say that people are complaining because they're negative, fragile people. Especially if they are not of the same demographic as the one they're complaining to (but let's not go into that here).

I would think that people who are chronically, systemically disadvantaged *would* complain the most. That's perfectly reasonable.

It is possibly not because they're fragile, or weak, or privileged, or prima donnas. 
Maybe it's because the system treats them like second-class citizens.
Or because people write them off instead of listening.

Make the Symptoms Stop

Often effort is put into stopping the complaining. 

And maybe we don't like people complaining because we are (somewhat) at fault.

Maybe victim blaming is a way of keeping people in line; after all, everyone else puts up with it so why shouldn't they?

It's inconvenient and depressing to hear about problems. Maybe we can just hush them up and get on with our work. After all, they call it "work", right?

But when someone complains to you, they're bringing you a problem that they'd like to help solve, or to have solved for them. Isn't having fewer problems in the org a good thing? How important is it to perpetuate the problems?

Of course, there is this bizarre situation where people want to complain, but they don't actually want the problem solved. They'll fight you to keep the problem going, but they hate it just the same. Maybe the complaining is how they're trying to establish rapport?  It's very confusing to me. Maybe we should dig deeper here. There is something strange and cultural and human in it.

Culture of "Better"

When you hear "people are always complaining, just ignore them" -- is that a sign your company culture is not based on solving the problems it has (or creates)? Is it that awful for people to work with us? What can we do to make the struggles of our daily work either easier or more worthwhile?

How can we "make people awesome", "deliver value continuously", "make safety a prerequisite"? Can we experiment and learn to see what works? Where do we start to build a culture of making things better?

Friday, April 20, 2018

The world, joyfully shining from a desktop.

Sometimes this world gives me joy.

I take photos as I travel. I also try to take some time to see the world and take extra photos.  One day I realized that I could set my Mac to use random photos for the backgrounds. It seemed natural to create a folder called "backgrounds" on Dropbox and use it. Over time I've moved a number of travel photos to that folder.

Today, as I was working I realized that my home office's three screens were showing NYC's Central Park, the view of Edinburgh from Arthur's Seat, and Colorado's Garden of the Gods.  As I was typing this, it changed to three more photos -- the pond in Central Park, a different view of Garden of the Gods, and the beautiful white cliffs of Dover in the UK overlooking the channel.

Through the day, when the backgrounds are not covered up in browser tabs, code editors, terminal windows, and social media apps, I can see my past adventures reflected in beautiful high-def.

I remember how much I love this world's beauty. It even distracts me from some of the ugliness and outrage-provoking things I see in the course of my work and social media use.

I have to say to myself, "what a wonderful world" -- and I'm so lucky to have seen so much of it.

It's a dream that a runt of a kid from the Indiana farmlands with no particular skills or talents would never have considered possible. But he didn't know what he was capable of, or where it could take him. Probably still doesn't. But oh, what we've seen. Oh, where we've been.

This brings me joy. I pray it does you as well.

Friday, March 9, 2018

The Impossible Every Day

This is a slightly-edited re-presentation of a thread on Twitter. 

As Twitter is ephemeral, and blogs are longer-lasting (and easy to point to in the future), I have copied it here and fixed a couple of typos that Twitter wouldn't let me correct.

Once upon a time, people argued that TDD is impossible. You can't write a test first, obviously, because you can't test something that doesn't exist. It's obviously ludicrous.
But you change how you think about "a test" and it works just fine.

Likewise, two people working at one machine? Ludicrous! That's twice the cost! Except that when you realize that programming is more about thinking than typing it makes easy sense. It works just fine.

And mobbing! Why, how stupid must that be!!!! Having five or six people work on one task is horribly wasteful... until you learn that it gets work done quickly by putting the right number of brains and set of skills on the task. It works fine.

And evolutionary design! What kind of an idiot would think you should write code before you have a complete and detailed system design?! Except that you design in refactoring, which is a new way of thinking about design and it works just fine.

Self-organizing? How ludicrous! To think that coders, UX, testing, BAs, ops and other functions can choose their own work and coworkers task by task? Chaos! Except that people do it, and it works just fine.

LEAN STARTUP!? What madness is that? Building features and products and then switching customers and products -- not even knowing what you're building and who for? Except that you change how you think about "customers" and it works just fine and produces some successes (even $M).

NOT WRITING COMPREHENSIVE COMMENTS IN YOUR CODE? NOT FOLLOWING SINGLE-ENTRY, SINGLE-EXIT? Not wise. Except that it works just fine if you change how you write your code.

Running interpreted languages in production!? That will NEVER perform AT ALL!!! In VIRTUAL MACHINES!? That's moronic. Except that it works extremely well. It happens every day. It's fine.

Releasing monthly? No? WEEKLY? No way! No? DAILY!? That's going to be nothing but disaster. Except that orgs release hundreds of times per day, and if you change how you build and deliver, it works just fine.

Working without PROJECTS? That's surely mad. Except that product orgs do this all the time, no projects, no funding-by-project, no staffing-by-project; surely there will be no accountability!? Except it works just fine all over the world.

Letting customers see work that hasn't been finished and released yet?! Surely this is suicide! They'll be disappointed and angry, and will not work with us! Except that this kind of feedback loop is commonplace now, and it works just fine.

Working without BRANCHES? All the people mixing their code together all the time? It just makes sense that this will be a mess! Things will be constantly broken! Nobody will ever be able to release.  Except that people (in surprising numbers) are doing Trunk-Based Development and guess what? It works just fine. You have to change your thinking and process a bit, but it's okay.

Working without estimating all the work in advance and making promises and plans and budgets based on the estimates? Impossible! Except that people think differently about budgets and estimates and plans, and it seems to work just fine.

Nearly everything that vintage 1985 otter would have thought madness, foolishness, and impossibility has become not only possible but commonplace.

When people work together, nothing is beyond them.

You have to ask, what impossible thing is going to be done next?

I think we have to check our incredulity at the door.

As we used to think, none of this could have happened. We change how we think, and new vistas open up.

Who knows what is next?

Monday, March 5, 2018

Predictability as Maturity or System?

The predictability of a team is subject to the predictability of the work.

Duration of a task depends on three things primarily:
  1. Raw effort -- fairly predictable, measurable, repeatable -- easy stuff.
  2. Risk -- chance we might break something and have rework or damage; VERY hard to predict in advance of efforts, extremely hard to detect without significant effort in automated testing AND exploratory testing. This can bring us delays late in the game that cannot be ignored. 
  3. Uncertainty -- the amount and difficulty of the learning we will do, along with the chance that we may hit dead ends and have to start over. Cursedly hard to predict even order of magnitude.

Interesting and Uninteresting Work

Work that is mostly raw effort is uninteresting. Nothing is learned there, nothing is innovated, nothing taxes or stretches the workers. It's mostly just typing. It is the vast minority of software work, because developers have a tendency to automate any uninteresting work. Not only does it save them a lot of time and tedium, but automating that work becomes interesting.

Work that involves risk and uncertainty is interesting work. It requires active minds and research and study. Easily 90% of all software work is interesting work. 

When you ask developers what they want to do, they will almost always tell you that they want to do interesting work. They want to take on hard problems and work them to completion, preferably with a minimum of distraction since risk and uncertainty will require their full attention. 

Developers will also talk about safety to innovate and try things that might fail. Again, these are natural parts of doing interesting work.


Phrasing the ability of a team to estimate accurately as "maturity" is unhelpful.  The likelihood is the opposite. You give more mature and skillful teams more interesting work, and thereby less predictable work. It's the immature teams that churn out identically-sized bits of raw effort per week and who may be able to put in extra hours without degrading the quality of the result. After all, you give them the uninteresting work.

If the work has high uncertainty and high risk, then the team could be composed of genius-dripping consummate professionals and the estimates will not be consistent.  


Organizations often expect teams to have consistent velocity and consistency in their SAY:DO ratio regarding estimates. That plays well with the idea of developers (coders, testers, ops, etc) as unskilled laborers in a simple or possibly complicated system.

The work is interesting, and some work is more interesting than other work. As such, velocities and estimates will differ among teams because different teams do different work. We should respect this as natural variation, not condemn and eliminate it as special variation.

But, We Need Accurate Estimates!

You probably do. It is possible, even likely, that you've built a system of work around the idea that software is primarily typing, that developers are unskilled laborers, and that work is generally simple and occasionally complicated, but that the only real risk and uncertainty are in relation to the calendar. 

Do you need to need accurate estimates? Maybe you do. In that case, don't take on interesting work. Do only predictable things that have been done many times before, in a tried-and-true technology, for customers you understand very well. Don't complain about a lack of modernization, automation, innovation, or what-have-you. Keep it dull. You will probably lose some developers who like interesting work, but maybe you'll maintain a staff base of people who like trading n-hours-per-week for a paycheck, and those are the predictable ones anyway. 

But maybe you don't really need accurate estimates. Maybe you can learn to do your work as an ongoing evolution of a software product using fixed staff and flexible scope. Maybe you can use story mapping and exception mapping and other story-splitting techniques to manage risk. Maybe you can have developers learn TDD and BDD and automated testing along with manual testing and coding. Maybe you can provide scheduled room for experimentation and dead-end mitigation. 

Possibly you don't need estimates at all but haven't considered what a different system it would have to be in order to stop relying on estimates. That's a topic for another day, but maybe you could do a bit of research into other ways of working.

If you are going to do interesting work, though, you can't insist on accurate estimates. You'll have to tune your process to allow for risks other than date-and-content risks by adding slack, testing, and support.

Either way, you probably want to make sure you invest in refactoring, so simple things don't become risky and thereby unpredictable. 

Your code should always be as easily workable and easily fixable as possible. That still counts.

The Inevitable

If the work is unpredictable, as evidenced by our history of poor estimation, then perhaps the lesson is that inaccuracy of estimates is not a matter of immaturity or insufficient effort.

If estimates have always been inaccurate, we have to accept that estimation error is (to us) inevitable.

If organizations all across the industry are also bemoaning poor estimates, then it's probably not just us.

So we treat it as normal and inevitable and keep looking for someone to come up with a better way.

It's how we've survived storms and market trends and other unpredictable elements for centuries; we accept it and allow for it.

If one or two "bad" estimates (ie estimates that turn out to not match actuals) will ruin a business plan and bad estimates are inevitable, then the business plan is fragile. If we can't have better estimates, we'll need more robust plans.

Wednesday, January 31, 2018

A little signal-to-noise

WARNING: the blogger "WYSIWYG" editor is really not very good about the "WYSIWYG" bit... so this article looks great in the editor but is a real crapshow in the actual post. I'm fixing it. Be kind, and bear with me.

In our eLearning, we publish problems and solutions. Sometimes people contribute other solutions and we show those as well. Today's sample comes from our Test-Driven Development album.

Album Art for Test-Driven Development

GeePaw Hill tells us "everything matters" -- so today I'm going to nitpick at something that (in this case) is tiny and you might consider it insignificant. So be it.

But just the same, I would like to introduce you to a process that can improve your code and design in ways subtle and profound.

In this case, it stays a little to the subtle side, but that's okay for a blog.

Here is a source code example:

AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.GOLD, 900, 1), 49.95);
AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.GOLD, 900, 2), 64.45);
AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.SILVER, 490, 1), 29.95);
AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.SILVER, 490, 3), 72.95);

This test is checking a hypothetical phone billing calculation.

Let's look at these lines and figure out how much unique content exists per line.

The first 56 characters of every line are the same:

AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.
AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.
AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.
AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.

56 characters of every line are duplicated. You probably didn't even read them after the first time.

GOLD, 900, 1), 49.95);
GOLD, 900, 2), 64.45);
SILVER, 490, 1), 29.95);
SILVER, 490, 3), 72.95);

About 25 characters are not the duplicated prefix.

Of these, even fewer are unique (if you drop punctuation).

With 2/3 of every line being noise, it's pretty obvious that this code is inviting you to copy and paste. Heck, it's practically demanding it.

How many times would you want to type those first 56 characters (plus indentation)?

Most of the time when people copy and paste, it's because the code asks them to do that.

I'm willing to wager a pleasant adult beverage that the four-line test was written by copying the first line three times.

If we were to get minimal noise, it might look like this:

SILVER 490 3 72.95

Now we've got it to four points of data, and that's pretty noiseless. Do you know what it means?

Nope. I didn't think so.

This has all the noise removed, but also all the information.

If you have near-zero signal, then having little noise doesn't help.

But if you have little signal, having a lot of noise doesn't make it any better either.

Find Significance

There is a violation of the fidelity rule here. The fidelity rule tells us this:
One reads the tests to understand the code. One does not read the code to understand what the test does.

The first three numbers describe facts about a simple phone bill.

public static double calculateRate(int plan, int minutes, int numLines)

The other is the expected amount of the calculation (here done in decimal because it's just a teaching example).

So when the plan is type=SILVER, and billing is for 3 lines and 490 minutes of use, the expected result is $72.95.

Now the question is how to phrase this. We are a little stymied because there are two different kinds of plans being tested here for two different conditions each. We're not going to come up with a test name that reflects that because it's a number of different ideas.

Maybe the tests are too big.

We could divide the tests into GOLD tests and SILVER tests. We could make four different tests.

This seems like a good idea since test naming is a classic way of making code make sense.

When we look at the code we see that the algorithm is the same regardless of plan. Only some numerical values change per plan. That's interesting.

Exalt the Significant

Possibly we could rework the code a bit. I'm going to take some liberties and not actually build and run this, but just examine some different organization.

var baseRate = 29.95;
var included = 500;
var extraMinutesRate = 0.54;
var extraLines = 21.50;

var silver = new Plan(baseRate, extraLines, included, extraMinutesRate);
assertEqual(72.95, silver.calculate(lines=3, minutes=490));

There is more to this, though.

  • The significance of 490 minutes is merely that it is less than 500.
  • The significance of 3 is that it's two more lines than the 1 included in the plan.
In this test, the extraMinutesRate is insignificant. It's a shame we have to provide it.

I'm not even going to talk about the primitive obsession, using floats for money, or any of the other obvious issues here.

Especially not having small classes for minutes, and for money, and type-safe function parameters to keep us from shooting ourselves in the foot via mishandling of variables.

Far be it from me to mention that. This is, after all, a training exercise.

Avoid Duplication

Now we're getting closer to something that can be understood from the test. The signal is increased considerably. That's a good thing. Sadly, these numbers are going to be all over the tests and duplicated in the production code.

That violates the Single Point Of Truth (SPOT) principle, and also damages our signal-to-noise ratio.

Now we'll have numbers all over the place duplicating numbers in other places, and we'll have to be careful to ensure that they all agree when they should.

Maybe what we need now is to create a record type to hold the variables for different rates. Let's call them GOLD_RATE_PACKAGE and SILVER_RATE_PACKAGE for now.

var silver = new Plan(SILVER_RATE_PACKAGE);
var underQuota = SILVER_RATE_PACKAGE.minutes_quota - 1;
AssertEqual( 29.95, silver.calculate(minutes=underQuota, lines=1));
AssertEqual( 72.95, silver.calculate(minutes=underQuota, lines=3));

This could be taken further, but consider this example v. the original.

AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.SILVER, 490, 1), 29.95);
AreEqualWithPrecision(PhoneBill.calculateRate(PhoneBill.SILVER, 490, 3), 72.95);

On one hand, they are almost exactly the same. On the other hand, there is a huge difference in the signal-to-noise ratio and the places one has to look to research why the first answer should be 29.95.
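To make the rate-package idea concrete, here is a runnable sketch in Python. Be warned: the rate formula is my assumption, reverse-engineered from the expected totals in the example (the base rate covers one line and the included minutes; each extra line and each over-quota minute costs extra):

```python
# A sketch of the Plan/rate-package design. The formula below is an
# assumption reverse-engineered from the example's expected totals,
# not the actual exercise code.
from dataclasses import dataclass

@dataclass(frozen=True)
class RatePackage:
    base_rate: float        # covers one line and the included minutes
    extra_line_rate: float  # per line beyond the first
    minutes_quota: int      # minutes included in the base rate
    extra_minute_rate: float  # per minute over quota

SILVER_RATE_PACKAGE = RatePackage(29.95, 21.50, 500, 0.54)

class Plan:
    def __init__(self, rates):
        self.rates = rates

    def calculate(self, minutes, lines):
        r = self.rates
        extra_minutes = max(0, minutes - r.minutes_quota)
        return (r.base_rate
                + r.extra_line_rate * (lines - 1)
                + r.extra_minute_rate * extra_minutes)

silver = Plan(SILVER_RATE_PACKAGE)
under_quota = SILVER_RATE_PACKAGE.minutes_quota - 1
print(round(silver.calculate(minutes=under_quota, lines=1), 2))  # 29.95
print(round(silver.calculate(minutes=under_quota, lines=3), 2))  # 72.95
```

Notice how the test values relate to the package instead of floating free: under_quota is defined *relative to* the quota, so a reader can see the significance without hunting through the production code.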

So Friggin What?

The point of this is not "my code is better than yours" or "I'm cooler than you" (which is almost certainly false).

What I'm suggesting is that there are subtle-but-different changes even in simple code if we consider the signal-to-noise ratio in our code.
  • De-Noise-ify
  • Find and Exalt the Significant 
  • Avoid Duplication 

As a result, you end up with code that is more obvious at a glance and likely has a better design as well.

This matters to me because I care about the code rather deeply.

Maybe you don't like it as well as the original.

That's okay, but what do you come up with when you follow the same process?