Wednesday, July 9, 2025

A quick note on "At Scale"

I will submit that the problem of scale is mostly that you


  1. Have to deal with divergent, incompatible SW development paths.
  2. Might not be able to handle losing fine-grained control of every element of the business.
  3. Are likely to accumulate heavy processes with a lot of inherent delay and unpredictability in them.
  4. Have to deal with the "Law of Two Floors" (LO2F).

The Law of Two Floors

The first few points may be obvious enough to not need explanation, but too few people know the LO2F.

Nobody two levels ABOVE OR BELOW you on the org chart really knows what your days are like.
I think this may well be a proper "law" and not just observational comedy.

It doesn't happen in the small, scrappy startups where there aren't three levels on the whole org chart, or where the entire programming team can sit in the same conference room. It is a problem of scale.

You all have enough production work, customer management, finance, regulatory matters, and technical work every day. You can't really stay in touch. You don't know everyone by face and name anymore. You've not had coffee together (maybe ever).

You will become an "abstract person" to people 2-3 levels away from you, AND THEY TO YOU. 

Don't think it's "poor me, they don't value me" because it's also "poor them, you don't value them." 

Both sides make up stories about what they think the others do and how they do it -- usually laced with self-protective suspicion.

"They just want to fire us to shore up their balance sheet."
"They don't care about the business or its people, just their quarterly bonus."
"Those programmers don't want to work -- they're probably goofing off right now."
"The testers don't really care about the product or the users, they're just holding up progress."

One might even be tempted to force-mold the real people into the abstract shape one thinks they should have.

Note the "everyone has to be the same" compliance drive, regardless of who people are or what kind of work they do.


Those are problems of scale. 

Perhaps the most painful problem of scale for software development is in divergence. You can read about that in my blog on easy integration (which is harder to do at scale).

Who (the heck) Am I?

I'm Tim Ottinger.

You may already know me.

I'm a long-time developer, agilist, and XPer, and a specialist in CI/CD, teaming/ensemble work, TDD, and software delivery in general.

I wrote the second chapter of Clean Code.  I am the originator and co-author of Agile in a Flash with Jeff Langr, and I wrote Use VIM Like a Pro.

I'm mentioned in the "acknowledgements" sections of many other books on Code Craft, Agility, and Design Patterns, because I've been heavily involved in the tech community and have done reviews and edits of their pre-publication materials.

I worked with "Uncle Bob" Martin at Object Mentor (twice). With his company, I taught physicists at Stanford Linear Accelerator Center how to improve software design for high-energy physics. I worked with companies like Caterpillar, Xerox, and many others.

I worked with Ian Murdock (who founded the Debian Project) at his company, Progeny Linux Systems. We built software to aid in the creation of custom commercial Linux distributions.

I built industrial balancing machines with ITW.

I have worked with Industrial Logic (Joshua Kerievsky's company) in the USA for the past 14 years. We helped companies create their flagship products. We worked with companies in oncology, heavy manufacturing, agriculture, insurance, respiratory health, and other fields. I've designed and delivered training courses. I've helped transform legacy code. I've coached software development executives.


I'm known as "the agile otter" (a play on my connections with XP and agile practices, and a play on my last name).


I have taught modern software practices and code craft to thousands of people around the world, including the USA, Canada, India, China, Germany, Hungary, Poland, Norway, Australia, and other countries. I enjoy international travel and love serving software organizations, whether large or small. 


I'm particularly well known for code craft, TDD, CI, CD, and refactoring.

In recent years, I've worked to bring Lean flow to organizations, helping them to reduce the unpredictability, delay, and frustration of typical delivery processes. This has been rewarding and fascinating, and it continues to make a big difference for the companies I've served.

I'm living in Scotland. I speak one human language and many programming languages.

Monday, July 7, 2025

How much rework do you WANT?

How much failure demand and rework do you feel is appropriate in your system?

Rework in software is the correction of unacceptable code. That doesn't include refactoring, which corrects only the design and expression of ideas without changing what the code does. This is limited to "something does not work correctly and we can't let it be released." 

So, we don't want any of that, right? It delays releases and costs money. It distracts developers from doing new feature work. It raises the costs of release. Clearly, this is a bad thing, right?

No.  This is a less-obvious question than it first seems. 

Obviously, nobody wants to have errors and failures in their system. We would like to see 100% success rates, right? 

Well, hold on...

Pass Rates

How would you feel if your QA/QC department did not report a single defect in the past 3 quarters? Not one!

What if all the code reviews passed with "Looks good. Nice work!" and nothing was ever returned for repair?

What if none of the security scans or linters ever reported a problem?

What if every one of your releases went out flawlessly and nobody ever reverted anything?

Would you actually be happy?

Would you be angry with code reviewers, testers, and your investment in scanners?

No Rejections? That's incredible!

The word incredible means that it has no credibility -- it's not trustworthy and likely untrue.

The problem with my phrasing above is that I expressed it as people not reporting problems, rather than expressing it as people doing diligent and competent work and still not finding any problems because the code is so perfect.

It's hard for people in most programming groups to even imagine code that isn't full of easily-catchable issues (and probably some subtle ones that aren't easy to spot).

But this leaves us in a funny picture: we need the code to have problems in order to prove the value of the gauntlet of post-code checkers we have put in place. How else can we prove the efficacy of the system that keeps bugs from escaping into the wild?

Competition

The usual system pits developers against testers (human and otherwise) and requires there to be a certain level of rework. For every bit of code that passes 100% on the first try, we give a demerit to our testing gauntlet. For every time code is rejected by the system, we give a demerit to the programming staff.

Somebody has to lose for the system to remain stable. 

When the code improves, the test gauntlet has to raise the bar until it can report more issues (however trivial) in order to prove the value of its checks and justify the salaries and subscriptions paid periodically.

That creates a reputation that the programmers are ineffective and flawed. If the programmers improve again, the shade falls back on the gauntlet operators.

Isn't this "using blame and shame to create an upward spiral"? 

Does it make a difference?


Here is a pipeline modeled on some real post-code pipelines:

Looks pretty good, right? Pretty high pass rates?

The overall pass rate is only 58%. 

Let's run some 100-task simulations. Ideally, each step would run once per task. When a code submission is rejected, it starts back over at the beginning.

The simulation is unfairly simple. 
  • It doesn't include time to remediate the problems. 
  • It doesn't consider queuing waits, such as when the QA is busy, and work piles up. 


With this pipeline, 180 extra hours (rework) are needed just to operate the quality gates. That doesn't include programming or queuing time. 
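
If you want to play with the numbers yourself, here is a minimal simulation sketch in Python. The gates, their pass rates, and their per-run hours are invented for illustration (chosen so they multiply out to roughly that 58% first-pass rate); a rejection at any gate sends the task back to the start of the gauntlet.

    import random

    # Hypothetical gates: (name, pass probability, hours per run).
    # These numbers are invented; they multiply out to roughly a 58%
    # overall first-pass rate.
    GATES = [
        ("code review",   0.90, 1.0),
        ("QA test pass",  0.80, 2.0),
        ("security scan", 0.95, 0.5),
        ("release check", 0.85, 0.5),
    ]

    def gate_hours_for_one_task(rng: random.Random) -> float:
        """Hours spent in the gates for one task, counting every re-run."""
        hours = 0.0
        while True:
            for _name, p_pass, cost in GATES:
                hours += cost
                if rng.random() > p_pass:   # rejected: back to the start
                    break
            else:
                return hours                # passed every gate

    rng = random.Random(42)
    ideal = 100 * sum(cost for _, _, cost in GATES)
    actual = sum(gate_hours_for_one_task(rng) for _ in range(100))
    print(f"ideal: {ideal:.0f}h  simulated: {actual:.0f}h  rework: {actual - ideal:.0f}h")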

Costs


Intentionally driving down the pass rates (making the gates "extra tough") creates an appetite for rework and failure, and a kind of informal currency around it. One earns points for slowing the system, increasing the costs of work, delaying releases, and destroying the predictability of any work in progress.

What if we kept a high standard, and even raised the standard, but rather than using this to return work and raise costs, we focused on the developer experience and made it possible (possibly inevitable) that the developers would produce code that works and satisfies users?

Deming described the "quality control" function as "you burn the toast, and I'll scrape it" -- rework that costs money and time and cannot possibly be the path to total quality.

Shifting


The point of "shift left" for QA is to move more of the evaluative function to the moment of code creation.

This is wisdom.

There are many ways to approach development without rework.

With techniques like (proper use of) static typing and design-by-contract, we move more of the quality responsibility to compile time. Even dynamically-typed languages often have a type-checking extension, and these can make it hard to write ill-formed code without knowing it.
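
As a minimal sketch of what that looks like (the names here are invented, and it assumes a checker such as mypy or pyright runs in the editor or the build):

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        subtotal_cents: int
        tax_rate: float

    def total_cents(invoice: Invoice) -> int:
        # The checker knows the field types, so returning an unrounded float
        # here (or passing a string below) is flagged before the code ever runs.
        return round(invoice.subtotal_cents * (1 + invoice.tax_rate))

    total_cents(Invoice(subtotal_cents=1000, tax_rate=0.20))  # OK
    # total_cents("1000")  # rejected by the type checker; no test run needed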

Most IDEs have some level of code-checking built into them. While some programmers are highly cognizant of warnings, errors, and suggestions, many others ignore these helpful quality reminders. Admittedly, not all warnings in all IDEs are useful, but that doesn't mean that they are all to be ignored.

There are a number of "lint" utilities that can be installed in the IDEs and editors we use.  These can spot poor usage patterns, clumsy expressions, possible security flaws, anti-patterns and code smells, and unconventional formatting issues. Again, some developers refuse to install these and others ignore them.
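
As an invented example of the kind of thing they catch early (the exact findings depend on the tool and rule set, of course):

    import os, sys              # unused imports: flagged before review ever sees them

    def isEligible(age):        # naming-convention and missing-annotation warnings
        if age >= 18:
            return True
        else:
            return False        # "simplifiable if" smell: just write `return age >= 18`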

Practices like TDD exist to provide a behavioral level of testing that goes beyond what a linter or IDE can check. With the microtests TDD relies upon, you don't just know that the function takes two integers and returns a floating-point number; you know that given a certain set of numbers, it will return the expected result. This is an extension of quality checking with a lot of power: you can change how the code works provided it still does the right thing. That allows one to evolve the design and improve the expression of code ideas safely. It's one of the few ways of working that does (since design-by-contract is rarely implemented).
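
A tiny pytest-style illustration of the difference (the function and the values are made up):

    def mean(values: list[int]) -> float:
        return sum(values) / len(values)

    def test_mean_of_two_integers():
        # Not merely "takes ints, returns a float" -- it returns the right float.
        assert mean([2, 3]) == 2.5

    def test_mean_of_one_value_is_that_value():
        assert mean([7]) == 7.0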

Automated testing helps here also. Tools like Playwright and Storybook can take UI development to a new level of reliability.
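
For instance, a browser-level check might look something like this minimal Playwright sketch (assuming the pytest-playwright plugin and an imaginary login page; the URL and selectors are made up):

    from playwright.sync_api import Page, expect

    def test_bad_password_shows_an_error(page: Page):
        page.goto("http://localhost:3000/login")        # hypothetical app
        page.fill("#username", "otter")
        page.fill("#password", "wrong-password")
        page.click("text=Sign in")
        expect(page.locator(".error")).to_contain_text("Invalid credentials")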

But the biggest left-shift seems to be one that people are in a huge hurry to discount - working together. 

I'm not sure why a proven method of enabling CI/CD and high quality, lauded in many studies and publications (including the DORA report), seems to have such a curb-appeal problem, but people don't want to think about this one.

It matters, though. Nobody knows all the grammar of all the programming languages they use every day. Nobody knows the entire standard library of any one of those languages, nor all the relevant standards. Nobody knows all of the application code. No one person can be expected to know and keep up with an application system and its ecosystem. A few people working together can get close enough, though. You might be amazed at how little common space there is in the Venn diagram of who knows what among solo programmers all "doing their own work." You would be amazed how much ground is covered by a diverse group of skilled workers.

But of course, if you don't have errors and rework, and all of your code goes out the door well and quickly, and your customers aren't waiting and yelling at you, then how can you tell which of your people are doing their best work?

And does that matter when your company is succeeding?

Monday, March 3, 2025

There is No Automatic Reset for Engineering

Do you remember all those rushed changes that your developers implemented three years ago, and how they complained about the design damage they caused to make that happen?
It's all still in the codebase. It never disappears.
You may have forgotten it, but they still live with it every day.

I'm not saying you were wrong to be in a hurry then; I'm only saying it's not over.

It Does Not Heal Itself

In engineering, software or otherwise, whatever decision we make this month, we have to live with it from now on, or until someone invests additional time in reversing it.

That hack is as permanent a part of the product as any well-considered change. 

It doesn't refresh with the next month, quarter, or planning period. It doesn't fade away. It doesn't heal.

There is no end-of-period reset. 

Making new goals doesn't remove the baggage of prior goals.

Do people in other departments have to live with January 2013 for the rest of their lives? Or is it only engineering that has to deal with every dirty hack since the beginning of the organization?

I wonder if non-engineering parts of a company properly recognize this "code is forever" situation.

Hurrying Is Sometimes Wise

That doesn't mean we should never hurry or never be quick; we all want to produce a quick and meaningful effect. But if we get there by cutting corners and producing risky code, we need to productize it (turn the working prototype into something we expect people to trust in production).

It's good to borrow time from the future to capture a market opportunity, but we owe the future that time; we can't borrow freely forever as if it costs us nothing.

Incremental and iterative development is a good thing. It's reasonable that we adjust our product intentions to keep up with changing market conditions if we can. 

The problem is rushing weak code out the door over and over again.

A continuous stream of hacks we must live with forever will destroy our ability to produce quickly and may impact our ability to build and operate the system cost-effectively.

It Can Be Fixed / Managed

Some ongoing "sustaining" and "accelerating" work is needed to drive down the cost and drag of past decisions. The "accelerating" work also sets up the product to scale and manage runtime costs. 

You can have some success by intentionally working in a "portfolio" way - keeping all the important work going while shifting the onus to the more appropriate category and letting engineering work out the details.

Imagine you could set the percentages (with engineering) for this kind of portfolio:

  • Advancing: Adding new features and UX improvements - all customer-facing.
  • Sustaining: Solving defects, doing support work, automated testing, and refactoring. Without these, the product either cannot run (certificate renewals), won't run well (defects), can't be shown to work (testing), or may suffer past design damage indefinitely.
  • Accelerating: doing work to increase development and validation speed so you can do more of the other two kinds of work successfully and quickly.

Perhaps now is a good time to put 20% more effort into sustaining, to ensure that your existing customers will renew their subscriptions.

Alternatively, this may be a good time to put 30% more effort into new features because a trade show or sales push is coming up.

Are we frustrated with the slowing pace of work? Maybe we should put 40% of our effort into accelerating for a few weeks, then bring it back down to 10% or 15% of the total workload.

Are we getting bad press from defects? Perhaps we can reduce the accelerating work to 10% and put 50% into support and 40% into advancing features. 
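
If it helps to see the idea written down, here is a trivial sketch (the category names come from this post; the numbers and the check are only illustrative):

    # Percent of engineering effort per category; must total 100.
    portfolio = {
        "advancing":    50,   # new features and UX improvements
        "sustaining":   35,   # defects, support, automated testing, refactoring
        "accelerating": 15,   # work that speeds up development and validation
    }

    assert sum(portfolio.values()) == 100, "portfolio percentages must total 100"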

This doesn't need a lot of close supervision from outside engineering, just a good sense of what the product needs to move forward. There are ways for engineering groups to regroup and self-organize to meet goals.

There are many ways to handle this kind of problem, and I'd love to hear how yours works and how you keep your balance as an engineering organization.

Ecocycle Planning (Liberating Structures)

At this point, I'm just relating what others have told me; I've not engaged in ecocycle planning yet, but it seems a brilliant idea. Since it is a part of Liberating Structures, I don't doubt that many other people have done so. 

Take a look at the EP process, as it provides additional business context to the planning of development work. 

Joshua Kerievsky is working on (and writing about) the business contexts of technical work. I hope to provide links to his articles in my blog as well as on social media as they become available for public consumption.




Tuesday, February 11, 2025

Pair Programming Listicle

There are many ways to collaborate, and pair programming is the first many people consider. 

It's a useful practice, though it can be emotionally intense compared to other ways of working, like mob programming (AKA teaming or ensemble) or swarming. It's easier for people like me to work with three partners than with only one.

The fact that other ways exist in no way diminishes the value of pair programming, and if you aren't able or aren't allowed to go to a more inclusive collaborative technique, pair programming is a great way to work.

That is, if you're doing it correctly and don't fall into the key dysfunctions...

Here are the resources:


BASICS

TIPS FOR PRACTITIONERS


DYSFUNCTIONS


BENEFITS

This is just a start. Sometimes we ship and then refactor.


Tool vendor Tuple also has a listicle on pair programming. You might find it useful.

Feel free to suggest other links that you have found helpful, funny, or illuminating.

Also, remember that we teach these skills at Industrial Logic, along with many other technical, management, and UX skills. We're here to help you be successful. Give us a call.

Wednesday, December 11, 2024

Irresponsibility: Estimates and NoEstimates

Sometimes the argument is made that not estimating is irresponsible.

I understand why people say it. I don't necessarily agree. It is arguable that relying on developer estimates is at least as irresponsible.


There are interesting counterarguments that I'll summarize here.


But First, This:


Many arguments lean on "managers need accurate and precise estimates," but if accurate and precise estimates don't exist, it seems irresponsible to depend on them. Would you build your entire company's operating system on Unicorn Poop? That companies stay in business despite never getting the accuracy and precision they say they need suggests that maybe pinpoint accuracy isn't as important as we let on.


"Before buying a thing, I need to know the cost!" say some. Software development isn't like "buying a thing." It is a collaborative development project. It can't be what it isn't. It needs ongoing management involvement.


I don't expect the above paragraphs to put those arguments to rest. They will keep being brought out, because if estimates were reliable they would make management much easier, and nobody wants management to be incredibly hard. Sadly, that's only wishful thinking.


George Dinwiddie's book suggests that we be empathetic toward the managers' needs and ask what they need the estimates for, how they plan to use them, and how we can help them. I like his approach and recommend his book.


The Responsibility Argument


So, on to the point suggested in the title: is estimation the responsible choice, and looking for other ways an irresponsible choice?

Can we balance these and compare them like-for-like?


Let's try.

  1. Wasting project time estimating and re-estimating (including detailing and estimating work that will never be done) is irresponsible. We see teams wasting many person-weeks per year (often more than 3 person-months) trying to estimate more precisely and accurately, without positive effect. 
  2. The estimate is not the duration. Getting people to shorten the estimate doesn't shorten the duration. Misunderstanding this leads to "sprint packing," which is an open invitation to cascading schedule failure.
  3. Summing inaccurate guesses won't add up to an accurate number, especially when there is pressure to guess low. Pretending it will average out (when it never has) is unrealistic and irresponsible.
  4. You can't make your estimates more precise and accurate than the natural variation in the work. Even setting story points to half-days won't help with this. See Why Is This So Hard?
  5. Estimating harder and more often doesn't eliminate randomness and variation in the system. Rather than estimating harder, work on becoming more predictable (even if it doesn't seem faster).
  6. Predictability isn't always what you most need from your teams. Does it make the outcomes better, or only make managing easier? What would you surrender (in time and money) to improve predictability? Would you be willing to allow teams to work differently (perhaps mob programming or TDD) to improve predictability, knowing that it will slow them down in the short term?
  7. Proponents of #NoEstimates never claimed you could eliminate every kind of sizing and estimation. It's just a hashtag. It's about reducing estimation waste, but the hashtag is the name that stuck. "AHA! You are choosing a day's worth of work! That's estimating!" never was the sockdolager one might expect. Proponents take that easily in stride. They recommend forecasting and slicing the work thin (see the sketch after this list); it doesn't upset them that those are forms of estimation.
  8. Managers must work with teams to manage risks. They can't delegate risk management to developers via estimation. They cannot "quiet quit" management by reducing it to mere scheduling.
  9. Managing, again: "How long will it take you to do this in 8 months?" isn't really a question. If the date is a given, the more important questions are "What do we do first?" and "What can we de-scope to hit the deadline?"
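
For what it's worth, here is a minimal sketch of the throughput-based forecasting mentioned in point 7. The weekly throughput history is invented; the idea is to resample what the team actually finished rather than ask anyone to guess.

    import random

    weekly_throughput_history = [3, 5, 4, 6, 2, 5, 4, 7]   # items finished per week (invented)
    backlog_items = 40
    rng = random.Random(7)

    def simulate_weeks_to_finish() -> int:
        done, weeks = 0, 0
        while done < backlog_items:
            done += rng.choice(weekly_throughput_history)  # resample real history
            weeks += 1
        return weeks

    runs = sorted(simulate_weeks_to_finish() for _ in range(10_000))
    p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
    print(f"50% chance of finishing within {p50} weeks; 85% within {p85} weeks")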


All methods that help teams, managers, and clients work together efficiently to produce a positive outcome without all the irresponsible wastes of typical estimate-heavy systems are welcome under the #NoEstimates banner.


This blog post is unlikely to convince anyone of a firmly held opinion, but maybe it's helpful to people who want to look at the question from multiple angles and who don't feel threatened by the fact that this hashtag and its concepts exist, nor by the fact that they're not universally accepted. Be kind to each other. Don't refuse to care or refuse to listen. If you exercise "curiosity over judgment", there is no telling how far you will go. Maybe this article is an exercise in seeing things from a different point of view?

Tuesday, August 6, 2024

Basic Microphone Usage

How to use microphones (pay special attention to #3 and #12!):

  1. NEVER NEVER NEVER point your microphone at a speaker. Not even accidentally. Don't drop your hands so the mic points at the on-stage speakers. Don't walk in front of a loudspeaker while holding a mic. The feedback will be piercing and can damage the system (as well as the audience's hearing)!
  2. If you are afraid of feedback, hold the mic closer to your face. The further away you hold it, the more the sound person has to raise the sensitivity of the mic (making feedback more likely).
  3. Don't cup the head of the mic. Cupping causes feedback, for technical reasons -- just don't do it.
  4. It is okay to hear yourself; it means others can hear you. That's what the mic is for. Let the sound engineer make any necessary volume adjustments.
  5. Now that you are hearing yourself, you can adjust your pitch and pronunciation to sound better through the sound system.
  6. If you're singing, you will hear your own pitch better through the system than through the bones of your head. This will eliminate any delusions you have about your pitch, but it's a good thing. Imagine being off-pitch and NOT knowing it! Using the monitors is a skill you will pick up quickly.
  7. Point the mic at your tonsils. Hold the microphone like it's a glass of water and you are about to take a sip.
  8. For the sake of those who read lips, keep the mic just below your lower lip.
  9. If the mic is on a stand, try to follow the same advice. Either stand close to the mic or pull it closer to you, keep your mouth visible, listen, and adjust.
  10. Don't put your mouth on the mic, nor the mic in your mouth. Saliva on mics was always gross even before COVID.
  11. If you're going to shout, put a bit of distance between you and the mic, so the level through the system stays about the same. Shouting directly into the mic will cause a lot of distortion and noise, and nobody likes that. Don't worry, you'll sound like you're shouting even if it isn't actually louder.
  12. Don't blow into the mic (not with your mouth, not with your nose). Moisture can corrode or damage the sensitive inner workings. It won't fail instantly the first time someone does it, but it will need replacement sooner if people blow into it.
  13. Don't tap the mic. The impact can damage the internal workings of the mic. It won't fail immediately, but it will need replacement sooner.