Friday, September 10, 2021

FIRST: an idea that ran away from home

Quite some years ago, Brett Schuchert and I invented the acronym FIRST for micro-tests, though we called them "unit tests" (as was common at the time).

It’s grown a following and has been widely repeated. At this point the idea just exists in the aether; authorship isn’t often considered or cited, and in some circles it has simply become common knowledge, a thing in its own right.

I’m still proud of Brett’s work and my small contribution to it. I'm glad it has taken on a life of its own, but I notice when it's poorly described or corrupted, and I'm offended when people present it as their own unique work (or take praise for it, knowing that it is not original).

I get it, though. I'm sure there are many people whose work I didn't know how to credit, and whose work I may have likewise twisted to my own ends or interpreted in my own context whether I realized it or not. 

The acronym is pretty simple:

  • Fast - you can run all your microtests so quickly that you never have to decide whether to run them now or later.
  • Isolated - tests don't rely upon each other in any way, including indirectly. Each test isolates one failure mode only.
  • Repeatable - the test always gets the same result. This is largely a matter of good test setup and cleanup, but it also means insulating the test from things like time of day, network status, databases, global state, and configuration.
  • Self-validating - a test is pass/fail. You never have to examine the input, output, or system state to determine whether a test has passed. It is either green (passed), red (failed), or did not run to completion (error), and it states that fact very clearly.
  • Timely - microtests are not to be written in batches, either before or after coding. They are written with the code, preferably just before adding a line or just after finishing a tiny change to a function.
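To make the properties concrete, here is a minimal sketch of a microtest that satisfies the first four letters (the `Cart` class and all names here are invented for illustration; Timely concerns when you write the test, which code alone can't show):

```python
class Cart:
    """A tiny invented class under test."""
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)


def test_total_sums_item_prices():
    # Fast: no I/O, no network, no database, no sleeping.
    # Isolated: builds its own Cart and shares nothing with other tests.
    cart = Cart()
    cart.add(3)
    cart.add(4)
    # Repeatable: no clock, randomness, or global state is involved.
    # Self-validating: the assert passes or fails; nothing to eyeball.
    assert cart.total() == 7
```

Run it with a test runner such as `python -m pytest`; a whole suite of tests shaped like this can run in milliseconds.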

Some people change the T to Thorough because they don't want to encourage TDD.

I think this is a mistake. It's not "just as good" to have high coverage from a batch of tests written after the code is finished. It's not even nearly as good.

Done correctly, the tests inform and guide the code. 

  • They help us to consider our code first from the point of view of a user of the code, which results in more sensible and cohesive APIs. 
  • The tests make acceptance criteria a primary concern. If we don't know what result we are expecting, we can't write the test for it.
  • They make testability a primary concern: we can't write a test if we can't call the method or can't observe the result of making the method call. There is no "untestable code" if the tests come first.
  • As we realize we have corner cases, we add more tests, so that it's clear when we have covered the corner cases in the production code.
  • Because tests are written first, they are a kind of standalone documentation - you read the tests to understand the code. When tests are written after the code, they tend to take the structure and algorithms of the written code for granted: you must read the code to understand the test.
  • Whatever code passes the tests, that code is sufficient. The tests help us recognize a complete state.
  • Since the invariants for the production code are tested automatically, we can refactor the code to a different shape and design with the confidence of knowing that all of the tests still pass so we haven't violated any of our design invariants.
  • Each time our tests pass (run green) we are invited to reexamine the production code and the tests for readability and maintainability. This allows us to practice code craft continuously.
  • Because our intentions for the code are captured in the tests, we have externalized our intentions. We can be interrupted and quickly regain our context in the application - something that can't happen if we're in the middle of transcribing a large plan from our heads into the codebase.
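As a small sketch of that rhythm (the `parse_quantity` function and its names are invented for illustration): a corner case gets captured in a test first, and the production code changes only to make that test pass.

```python
def parse_quantity(text):
    """Invented example: parse a quantity field from user input."""
    # This branch exists only because the corner-case test below
    # demanded it: an empty field means zero items, not a crash.
    if text.strip() == "":
        return 0
    return int(text)


# Written FIRST, while the corner case was fresh in mind:
def test_empty_field_means_zero():
    assert parse_quantity("") == 0

def test_plain_number_still_works():
    assert parse_quantity("7") == 7
```

Whatever production code passes both tests is sufficient; the tests mark the complete state.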

Done post-facto, the tests have no way to influence the organization and expression of code in any meaningful way. Writing tests after the code in order to increase coverage becomes drudgery.

Where to find it.
Any quick Google search will turn up dozens of articles.


Wednesday, August 11, 2021

Changing Axioms

(originally posted in March 2008, republished with edits)


I read a mailing list entry in which one fellow (who? I can’t remember!) asked another:

“Do you want to get better at what you’re doing, or find a better way to get the results you want?”


I’m a sucker for a good one-liner. That one had me thinking, and as I’ve had other conversations about innovation, I keep coming back to that line.

In many Agile practices, we work really hard for a week or two, and then hold a retrospective. The purpose of the retrospective is to find ways to work more effectively for the next two weeks.

As we develop better software, we also evolve a better team. We may use “tricks” such as tracking our velocity and recording blockages on our ‘waste snake’ to provide data for our decisions, and we use gut feel to evaluate those things that feel like collateral effort to us.

If the practice works, we will see incremental improvement in the team. We will develop ways of avoiding special variations, and we will learn to accept our normal variations. 

It will make us better at the way we do things now.

However...

XP didn’t come from a series of incremental improvements to waterfall processes. 

I wasn’t there when it happened but it seems that they took on a change in axioms. They reimagined the development process.

They didn’t strengthen the contracts between groups but pulled all the decision-makers onto the same team.

They didn’t find more careful ways to preplan the code they were changing, but rather decided to lean radically on volumes of tests.

They didn’t build practices to improve their anticipatory design, they decided instead not to anticipate at all and simplify their design to allow future change. 

At the time, this was radical stuff.

I’m sure there have been many other less-successful process mutations, but there is no evolution without mutation.

The man behind the iPod, iPhone, and MacBook Pro has had some less-successful product ideas, too. Some exciting high-concept products didn’t make it in the wild. But then some new ideas become category killers.

How do we learn to make the axiomatic changes that lead us to radically better ways to get what we want?

Friday, August 6, 2021

Outliving The Great Variable Shortage

 


Originally posted on 2/26/2007

One of the more annoying problems in code, one that confounds readability and maintainability and frustrates test-writing, is the multidomain variable.

I suppose somebody forgot to clue me in to the Great Variable Shortage that is coming. I have seen people recycling variables to mean different things at different times in the program or different states of the containing object. I’ve witnessed magic negative values in variables that normally would contain a count (sometimes as indicators that there is no count, much as a NULL/nil/None would). I might be willing to tolerate this in programming languages where a null value is not present.

Yet I have seen some code spring from multidomain variables that made the code rather less than obvious. I think that it can be a much worse problem than tuple madness.

I have a rule that I stand by in OO and in database design, and that is that a variable should have a single, reasonable domain. It is either a flag or a counter. It is either an indicator or a measurement. It is never both, depending on the state of something else.

It is sort of like applying Curly’s law (Single Responsibility Principle) to variables. A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing and should mean it all of the time.
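A hedged sketch of the difference (the function names and the magic -1 convention are invented for illustration), in Python, where a null value is available:

```python
# Multidomain: `count` is sometimes a count and sometimes a magic
# "we never counted" indicator.  Every reader must know the trick.
def describe_multidomain(count):
    if count == -1:
        return "not yet counted"
    return f"{count} items"


# Single domain: "have we counted?" gets its own representation
# (None), and a count is only ever a count.
def describe_single_domain(count):
    if count is None:
        return "not yet counted"
    return f"{count} items"


assert describe_multidomain(-1) == "not yet counted"
assert describe_single_domain(None) == "not yet counted"
assert describe_single_domain(3) == "3 items"
```

The second form costs nothing at the call site, and no reader ever has to know that -1 once meant something special.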

Surely I’ll take a shot from someone citing the value of a reduced footprint, and I won’t argue very long about the value of using only as much memory as you must. In C++ I was a happy advocate of bitfields. I haven’t been overly vocal about data-packing schemes, and I think they can be a reasonable choice for compressing many values into a smaller space. But I will maintain that multipurpose variables are a bad idea and will damage the readability of any body of code.

I suggest, in the case of a constrained footprint where there truly is a Great Variable Shortage (GVS), that if (and that’s a big if) the author absolutely MUST repurpose variables on the fly, it is the lot of that programmer to make sure that users of the class/struct never have to know it is being done. Never. Including those writing unit tests. The class will have to keep data-packing and variable-repurposing as a dirty secret.

Perhaps a strong statement or two in rule-form should be made here:

  1. A variable should have a SINGLE DOMAIN.
  2. One must NEVER GET CAUGHT repurposing a variable.

We don’t have to make do with the smallest number of variable names possible. Try to learn to live in plenty: use all the variables you want. For the sake of readability, consider having a single purpose for every variable not only at a given point in time but for the entire program you’re writing.

Monday, June 1, 2020

A Whole Lot Of Nope

In light of recent outrages, there are a lot of people posting bloodthirsty things on social media. If you post bloodthirsty things, I want you to know I don't stand with you on that. I may agree with your underlying cause(s) and reason(s). But I don't want people to be killed.

I don't want looters to be killed.
I don't want protesters to be killed.
I don't want suspects to be killed.
I don't want curfew-violators to be killed.
I don't want civilians to be killed by cops.
I don't want cops to be killed by civilians.
I don't want politicians killed.
I don't want community leaders to be killed.
I don't want bystanders to be killed.

I'm not claiming that all of these things are equivalent. The only equivalence is that I want all these people to go home at night, and justice to be accomplished without bloodshed. I'm not suggesting outrage is unfounded. I'm not saying nothing should be done.

I will suggest that if escalation and bloodthirst were really the answer, they probably would have worked by now.

Monday, May 4, 2020

Speech Hacks to be More Decisive

I was in a leadership program with Christopher Avery some years ago. In that program, people would say “don’t should on yourself.”

Another friend had years ago told me that “should” is the saddest word in English because it means you see value in something, haven’t done it, and probably won’t.

I embarked on a quest to get rid of some aspects of self-defeating speech:

  • Instead of “I’m sorry” say “Thank you.”
  • Instead of “No thanks, I can’t eat that” say “No thanks, I won’t eat that.”
  • Instead of “I don’t know how” say “I haven’t learned YET.”
  • Instead of “I should” say “I may” or “I would like to.”

You know, these little things make a difference for me.

The speech patterns are more decisive and confident, reflect agency and choice, and generally help me avoid shame (in myself) and appearing uncertain or indecisive.

These are just little things, and it's not magic. Once a person chooses an attitude, they need to find language that supports that attitude.

My speech changes support the internal decisions I've made about how I want to address the world.

I am no psychologist so I can't say why it works for me or whether it will work for you. This isn't a prescription.

If you want to try some or all of these, then go to it. I didn't originate these ideas, and you don't owe me any credit if they work wonders for you. If they don't work for you, well, you were warned.

If you try these hacks, then please:
  • Try them for yourself and on yourself. 
  • Do not insist that people use these terms. 
  • Do not chastise or pillory other people for not using these terms.

Does changing a speech pattern really change a person's attitudes? Probably.

Are we wandering into political correctness and the euphemism treadmill right now?
Gosh, I hope not. I'm no expert on those and see both advantages and disadvantages in those things.

I don't feel the authority or invitation to lecture you on how you should/must/shouldn't/mustn't speak. I'm just offering some things you can choose to do or not.

I'm not saying you should.

Tuesday, March 17, 2020

Some Covid-19 Information

Hello all.

From time to time, I will post something that is not on-brand for Agile Otter Blog at all, but is of concern at a point in time or in celebration or mourning.

Today I'm listing some resources related to the Coronavirus COVID-19.

A quick warning: there is a thin line between being informed about the virus and being obsessed with it. You should have the facts, and these sites will help you. But don't be so obsessed that you can't think about anything else. Take precautions, follow guidelines, don't be fooled by myths, don't search for miracle home cures. Take care of yourself and your family, and try to continue living a productive and normal life.

I took time out to gather a few resources for you, but I have other things to do too. May we all get on with delivering value to each other and serving our communities of practice as well as protecting our friends and family.

Don't panic. Do take precautions as recommended.

Info and advice:
Myth Busters:

Stay healthy, and that includes not developing a morbid fascination with COVID-19.


Thursday, February 27, 2020

The Laptop "Problem"

A few friends of mine have been pretty down on people using computing devices during meetings. The very presence of open laptops, tablets, phones, etc., gives them the impression that people are uninterested and disrespectful.

You might know the individuals in question. There are a few of them, they all have some pleasant association with me in the past, and they are friends; I'm not here to talk about them. I want to talk about Curiosity Over Judgment instead.

Corporate Training


A long time ago, when I was working for a company in the great midwest, we took a class on crucial conversations. Our instructor was from headquarters and so was clearly an Important Person. The topic was clearly a Very Important Corporate Topic to have flown someone in from headquarters and cancel a day of work to educate us.

My friend had an Apple device that was his exclusive note-taking machine. If one is to take notes, it's best that they're collected together, and having them on his device meant he could organize and search them, and also carry his entire library of notes from place to place in a pocket. The device was an Apple Newton - this was well before the ubiquity of laptops and tablets.

Early in the course, the instructor walked to my friend's desk, tapped the device and said severely "Put this away."  My colleague tried to interject and explain, but the trainer would have no part of it. "Put it away now, please."

Realizing the hopelessness of his situation, my friend shut the Newton down and put it in his bag. Then he crossed his arms across his chest, sighed deeply, and checked out. I could tell he was not concentrating on the topic as much as fuming over the unfair treatment he'd just received.

The Important Corporate Topic was "crucial conversations."  Let that sink in.

The trainer who was presenting how to have crucial conversations had just missed the greatest opportunity to practice and demonstrate the very tactic they were teaching us. Instead, a unilateral hard-correction command: "Put this away."  No conversation along the lines of "I saw you were doing this, which led me to think that; what do you think, and what will you do?" None.


I approached the trainer during the next break because it allowed me to follow the rules and talk one-on-one to offer feedback.

"I noticed that you told Chris to put his device away."

"Yes, I don't need him distracted during this training. I can't believe he brought it in."

"Okay. I was thinking that maybe you didn't know what that device is and had misinterpreted it. Did you know that it's a note-taking device and he was keeping notes on the class? No? Okay, so what do you think about that?"

"I thought it was a game or something. I've never seen one of those."

"I thought that was the case. You didn't ask, either. You seemed intent on shutting it down."

"Well, I wanted him to pay attention."

"Do you know now that you prevented him from taking notes on your class? What do you want to do about that?"

"I'll take care of it."

"Thank you."

I just had my first "crucial conversations" type of talk, and it went well. I felt pretty good about it. I was waiting for the apology and explanation and change of behavior. To be honest, I was feeling both useful and a little smug as the instructor walked over to Chris's desk.

The instructor pointed at his computer bag, said "It's okay to use that now," and returned to the front of the class. No explanation, no apology, no nothing. As if it was not okay earlier, but now the class had reached a different point in the curriculum where it was okay -- no mistakes made, no assumptions, no flaws. I was dumbfounded, but at least Chris was able to take notes.

I vowed not to correct people based purely on my own assumptions.

GeePaw's Influence


I was co-teaching a class with Mike "GeePaw" Hill once. He was handling the first hour's introduction.

He stated that one will get out of the class what one puts into it, and he expects that everyone will try to get as much as possible. 

However, he said, he expects everyone to be grownups and manage their attention appropriately and therefore he would not be policing the use of phones and computers in the room. 

He suggested that one could feel free to take notes, look up references, or whatever as much as necessary as long as they attended the classes and did their best. 

I had recently been co-teaching with another trainer who asked that all phones, tablets, and laptops be piled on a table at the back of the room. I remember feeling uncomfortable with what seemed a draconian measure to me, and wondered how students would take notes. I remembered Chris's Newton and felt bad, but didn't confront the very confident trainer at that time.

But now I was with GeePaw in GeePaw's classroom, and I felt that what he was saying was right. We work with adults; they are tasked with learning, they are in charge, and there are reasons they may need a computer or tablet out. Some of them took notes, even. Some looked up references. Some were creating email lists of references and a summary of ideas for people who couldn't be there.

Some people were only allowed to be in the room under a manager's protest, because there were important things going on in their teams - releases, production crises, etc. That they were present in the room at all was a testament to their interest in the topic and their desire to learn even under these difficult circumstances. 

Rather than feeling competition from the devices, and disrespect from people using them, GeePaw gave respect and room to the attendees. I suspect they cared more because of it even if one or two of them may not have been paying full attention the whole time. 

Some Reasons You May Allow Devices In Meetings

Let's consider Twitter again. Here are some of the reasons our tweeps have listed why people may have devices in meetings:
  • Note-taking (see Chris, above)
  • Remaining available to managers via text/chat/slack
  • Vision issues mean they can't see your materials except via screen sharing
  • They are struggling to attend your meeting even though other events pull at their time
  • They were invited in case they were needed, and are awaiting the need
  • They're researching (pulling up logs, databases, documents, source code)
  • They may be summarizing the meeting for people who want to be there, but can't be
  • They are asking questions of people who should be in the meeting, but aren't
  • What you didn't allow them to ask, they may be asking on a back-channel so that they can present a more complete idea to the room.
  • Neurodiverse people may need to "burn off" excessive mental energy -- using the device as a fidget cube (one US president did crossword puzzles during staff meetings while paying attention and asking probing questions)
  • Recording the meeting
  • Sketching or sketchnoting the meeting
  • Documenting the agreements of the meeting for dissemination.
  • Remaining available for some personal crisis (sick parent, children, pet, house sale, etc)
  • Fact-checking statements by meeting attendees.
  • Checking company policy where a violation seems likely

Face it, we can't tell a person's mental state by seeing them type into a computer from a distance. We are fooling ourselves when we think we can.

Until we know for sure, we have only our assumptions.

Summary:


It is possible, of course, to take the presence of devices in a meeting or training as an affront to one's ego -- "they aren't paying attention to ME" -- or as discourteous disinterest. One is free to do so. Some people do so routinely. 

It is also possible that someone at your meeting is "untending" the meeting, paying no attention whatsoever and doing their daily work in your office instead of theirs: physically present but mentally remote. It's also possible that there's a reason for that. 

But this issue is a classic case where we can value curiosity over judgment. Maybe it's unfair of us to assume that the device indicates some particular mental attitude (generally one of disrespect) in the device users when we really don't know what they're doing or why. 

It's too easy to let ego defenses and the Fundamental Attribution Error take over, as they did for our instructor (and, I think, for my friends who objected so strenuously, citing disrespect). It may be as counter-productive and destructive to relationships when we do it as it was when my corporate instructor did it.

Maybe we ask first, judge second?

A little curiosity over judgment can go a long way, as can some GeePaw-style respect for the people we work with.