Wednesday, December 11, 2024

Irresponsibility: Estimates and NoEstimates

Sometimes the argument is made that not estimating is irresponsible.

I understand why people say it. I don't necessarily agree. It is arguable that relying on developer estimates is at least as irresponsible.


There are interesting counterarguments that I'll summarize here.


But First, This:


Many arguments lean on "managers need accurate and precise estimates," but if those don't exist, it seems irresponsible to depend on them. Would you build your entire company's operating system on Unicorn Poop? The fact that companies stay in business despite never getting the accuracy and precision they supposedly need suggests that pinpoint accuracy isn't as important as we let on.


"Before buying a thing, I need to know the cost!" say some. Software development isn't like "buying a thing." It is a collaborative development project. It can't be what it isn't. It needs ongoing management involvement.


I don't expect the above paragraphs to put those arguments to rest. They will keep being brought out, because if estimates were reliable, management would be much easier, and nobody wants management to be incredibly hard. Sadly, reliable estimates are wishful thinking.


George Dinwiddie's book suggests that we be empathetic toward the managers' needs and ask what they need the estimates for, how they plan to use them, and how we can help them. I like his approach and recommend his book.


The Responsibility Argument


So, on to the point suggested in the title: is estimation the responsible choice, and looking for other ways an irresponsible choice?

Can we balance these and compare them like-for-like?


Let's try.

  1. Wasting project time estimating and re-estimating (including detailing and estimating work that will never be done) is irresponsible. We see teams wasting many person-weeks per year (often more than 3 person-months) trying to estimate more precisely and accurately, without positive effect. 
  2. The estimate is not the duration. Getting people to shorten the estimate doesn't shorten the duration. Misunderstanding this leads to "sprint packing," which is an open invitation to cascading schedule failure.
  3. Summing inaccurate guesses won't add up to an accurate number, especially when there is pressure to guess low. Pretending the errors will average out (when they never have) is unrealistic and irresponsible (see the sketch after this list).
  4. You can't make your estimates more precise and accurate than the natural variation in the work. Even setting story points to half-days won't help with this. See Why Is This So Hard?
  5. Estimating harder and more often doesn't eliminate randomness and variation in the system. Rather than estimating harder, work on becoming more predictable (even if it doesn't seem faster).
  6. Predictability isn't always what you most need from your teams. Does it make the outcomes better, or only make managing easier? What would you surrender (in time and money) to improve predictability? Would you be willing to allow teams to work differently (perhaps mob programming or TDD) to improve predictability, knowing that it will slow them down in the short term?
  7. Proponents of #NoEstimates never claimed you could eliminate every kind of sizing & estimation. It's just a hashtag. It's about reducing estimation waste, but the hashtag is the name that stuck. "AHA! You are choosing a day's worth of work! That's estimating!" never was the sockdolager one might expect. Proponents take that easily in stride. They recommend forecasting and slicing the work thin -- it doesn't upset them that those are forms of estimation.
  8. Managers must work with teams to manage risks. They can't delegate risk management to developers via estimation. They cannot "quiet quit" management by reducing it to mere scheduling.
  9. Managing, again: "How long will it take you to do this in 8 months?" isn't really a question; the answer is embedded in it. If the date's a given, the more important questions are "What do we do first?" and "What can we de-scope to hit the deadline?"
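
A minimal sketch of point 3, in Python. The task count, distributions, and parameters are all invented for illustration; the only point is that errors sharing a bias do not cancel when summed:

```python
import random

random.seed(42)

N_TASKS = 50
estimates, actuals = [], []
for _ in range(N_TASKS):
    estimate = random.uniform(2, 5)            # days; pressure keeps guesses low
    overrun = random.lognormvariate(0.3, 0.6)  # skewed: large overruns, small underruns
    estimates.append(estimate)
    actuals.append(estimate * overrun)

print(f"sum of estimates: {sum(estimates):6.1f} days")
print(f"sum of actuals:   {sum(actuals):6.1f} days")
```

On a typical run the actual total exceeds the estimated total by roughly 60%: fifty low guesses summed together are just one big low guess.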


All methods that help teams, managers, and clients work together efficiently to produce a positive outcome without all the irresponsible wastes of typical estimate-heavy systems are welcome under the #NoEstimates banner.


This blog post is unlikely to convince anyone whose opinion is already settled, but maybe it's helpful to people who want to look at the issue from multiple angles and who don't feel threatened by the fact that this hashtag and its concepts exist, nor by the idea that it's not universally accepted. Be kind to each other. Don't refuse to care or refuse to listen. If you exercise "curiosity over judgment," there is no telling how far you will go. Maybe this article is an exercise in seeing things from a different point of view?

Tuesday, August 6, 2024

Basic Microphone Usage

How to use microphones (pay special attention to items 3 and 12!):

  1. NEVER NEVER NEVER point your microphone at a speaker. Not even accidentally. Don't drop your hands so the mic points at the on-stage speakers. Don't walk in front of a loudspeaker while holding a mic. The feedback will be piercing and can damage the system (as well as the audience's hearing)!
  2. If you are afraid of feedback, hold the mic closer to your face. The further away you hold it, the more the sound person has to raise the sensitivity of the mic (making feedback more likely).
  3. Don't cup the head of the mic. Cupping changes the mic's pickup pattern from directional to omnidirectional, so it starts hearing the loudspeakers and feeding back -- just don't do it.
  4. It is okay to hear yourself; it means others can hear you. That's what the mic is for. Let the sound engineer make any necessary volume adjustments.
  5. Now that you are hearing yourself, you can adjust your pitch and pronunciation to sound better through the sound system.
  6. If you're singing, you will hear your own pitch better through the system than through the bones of your head. This will eliminate any delusions you have about your pitch, but it's a good thing. Imagine being off-pitch and NOT knowing it! Using the monitors is a skill you will pick up quickly.
  7. Point the mic at your tonsils. Hold the microphone like it's a glass of water and you are about to take a sip.
  8. For the sake of those who read lips, keep the mic just below your lower lip.
  9. If the mic is on a stand, try to follow the same advice. Either stand close to the mic or pull it closer to you, keep your mouth visible, listen, and adjust.
  10. Don't put your mouth on the mic, nor the mic in your mouth. Saliva on mics was always gross even before COVID.
  11. If you're going to shout, put a bit of distance between you and the mic, so the mic receives roughly the same sound level as before. Shouting directly into the mic will cause a lot of distortion and noise, and nobody likes that. Don't worry: you'll sound like you're shouting even if it isn't actually louder.
  12. Don't blow into the mic (not with mouth or nose). Moisture can corrode or damage the sensitive inner workings. It won't instantly fail the first time someone does it, but it will need replacement sooner if people blow into it.
  13. Don't tap the mic. The impact can damage the internal workings. It won't fail immediately, but it will need replacement sooner.

Friday, May 24, 2024

Profitable Struggle and Unprofitable Struggle


Imagine I have to work in an unfamiliar and tough passage of code.

Unprofitable Code

I spend time and effort to puzzle it out, and all I get from it is that now I understand that tough passage.

If it was written in the most primitive way possible, using no interesting language or library features (which might have made it less puzzling), then it is unprofitable. I have invested my time in "playing computer," simulating every line of code and tracking each value in my head, or perhaps I walked through every line and watched every variable in a debugger.

I have expended this effort of will and focus but received little in return. I know what this bit of code is doing (for now) but I won't be any faster to write or read other code because of it.

That knowledge will be lost in months, possibly weeks or days.

The unprofitable code, left in its original form, will require as much effort next year as it did this time. It is a productivity speed bump.

Profitable Code

OTOH, if I dig into some unfamiliar and tough passage and learn some idioms, standard library functions, built-in syntax, or framework capabilities then I have been rewarded for my pains. 

This is profitable work; I know how to read and write similar code more fluently. It may save me minutes, hours, or even days of work. My investment will pay dividends for a long time.

People familiar with the idioms and methods will move quickly through this code, and others will only have to suffer that initial shock of newness once.
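
A contrived Python illustration of the contrast (my example, not drawn from any real codebase):

```python
from collections import Counter

# Unprofitable style: you can "play computer" through this, but tracing it
# teaches you nothing you can reuse elsewhere.
def most_common_word_primitive(text):
    counts = {}
    for word in text.lower().split():
        if word in counts:
            counts[word] = counts[word] + 1
        else:
            counts[word] = 1
    best, best_count = None, 0
    for word in counts:
        if counts[word] > best_count:
            best, best_count = word, counts[word]
    return best

# Profitable style: whoever puzzles this out once has learned
# collections.Counter and will read similar code faster forever after.
def most_common_word_idiomatic(text):
    word, _count = Counter(text.lower().split()).most_common(1)[0]
    return word

assert most_common_word_primitive("the cat and the hat") == "the"
assert most_common_word_idiomatic("the cat and the hat") == "the"
```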

The Scouting Rule

I like the "leave it better than you found it" rule. 

I could rewrite the unprofitable code as profitable code if I have the time and there are enough tests to make refactoring or rewriting safe. 

Without the safety of tests, I wouldn't want to take the chance of breaking all the code that depends on this passage.

If I can't make it profitable, I can make the code less unprofitable.

  • I could add tests so that the next person to encounter this code has more safety than I had.
  • I could do safe refactorings (rename variables, extract expressions, extract methods) to make it less puzzling (see the sketch after this list).
  • If all else fails, I can add some comments. 
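
For the refactoring bullet above, a hypothetical before-and-after (the names and data keys are invented):

```python
# Before: working but puzzling.
def process(d):
    t = 0
    for i in d["items"]:
        t += i["p"] * i["q"]
    if d["c"]:
        t *= 0.9
    return t

# After safe refactorings (rename locals, extract a method):
# behavior and data keys are untouched; only the reader's burden shrinks.
def process(order):
    total = sum(_line_total(item) for item in order["items"])
    if order["c"]:    # "c" flags a coupon; renaming the key itself would
        total *= 0.9  # ripple into callers, so that waits for test coverage
    return total

def _line_total(item):
    return item["p"] * item["q"]
```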

Is This The Right Thing To Do?

That's all probably obvious and straightforward, but there is a more interesting question underlying all of this:

Does the team want to learn the language and libraries fluently, or are they striving to avoid learning how to use those?

Readability is present in the relationship between a code artifact and its audience; its ENTIRE audience.

I am an author or an editor, but that makes me a minority part of the audience of this source code. All my colleagues and future colleagues who share the codebase are also the audience.

If the last thing they want is to profit from this example, then fluent use of the stack is a failing.

If becoming more competent is a goal, then avoiding fluent use is a failing.

The important thing is not my writing, but their reading. 

And so.... what?

Since readability is a quality of a relationship between the artifact and its audience, I have some choices. 

We can change the artifact to meet the audience's requirements and make the code more readable without changing or improving the audience.

We can change the knowledge and comfort of the audience and thereby raise the readability of the code without changing the code in any way.

There is also a middle path: we can change the code and the audience's expectations. It does not have to be all one thing or all the other.

What is your strategy for code readability and developer competence?  




Tuesday, February 27, 2024

Definition-by-Dysfunction

 I've done it. You've seen me.

You've done it. I watched you do it.

We've probably argued about it.

The Defining Dysfunctions

I published a blog post some time ago on the Industrial Logic website about programming together vs programming under surveillance. It's a relatively simple piece, and it identifies a problem we have in the world when it comes to just about any technique or discipline.

When I suggested that people mistake group programming for working under surveillance, an incredulous reader exclaimed “How could it possibly be anything else!?”

So here's the thing: a person had a bad experience. Instead of actually researching what pair programming is and how it works, they just sat down at a keyboard with another person and tried "doing pair programming" without any pre-study or preparation. They ended up with one person bored, watching the other program.

This is a widely-known dysfunction or "failure pattern" known as "Worker/Watcher." It's not how pair programming is done. 

The two people who had this one unpleasant, uninformed, wasteful experience came into it with little more than a sound bite ("two people coding together"), guessed at how it was done ("one person types"), and had a poor experience. 

Since they didn't start with a good definition of pair programming, that experience became the definition of pair programming. 

They had a defining dysfunction. 

Unless something new happens, they will forever see pair programming as a wasteful and pointless practice.

Is that the real definition of pair programming? 

Is that what it's about and how it's done? 

Not remotely. But it is the one touch-point they have -- the one experience of it they have had -- and "pair programming" is the name of that experience now.

Do they want to try it again? Clearly not. They know what it is, and it's nonsense. 

They will likely take to social media to decry the BS that is pair programming and save everyone else from this wasteful and unprofitable behavior.

You and I have likely done the same. 

Give those complainers some credit, because at least they tried it (or something that they imagined was it) first.

Some Examples Might Be Useful Here

I ranted against scrum™ for years, because I let the dysfunctions I routinely see become the definition of the process for me. If anyone were to actually try doing scrum™, it would be a pretty good way of working, but nobody does.

Sadly, a lot of people in that space have adopted the defining dysfunctions. As far as they are concerned the "right way" to do scrum™ is to have a titled person assign individual work tickets to developers, who strive to serve the maximum number of tickets per fortnight to raise the velocity of the team -- striving to do "twice the work in half the time" (a soundbite). This isn't remotely what the defining document of the scrum™ method describes but it is what is often taught as "doing scrum."

Are you an agile hater because it's all Jira tickets, meetings, estimation, and work crammed in to meet artificial deadlines? I'd hate that too. I do hate that. It's just not agile. It's not even scrum™.

Do you hate agile because agile is "no documentation," "no design," and "no estimates"?  Well, that's a worthy distaste. It's also a mischaracterization.

Do you hate TDD because you have to write all the tests first, and you don't even know what shape the answer is going to take, so it's impossible? Well, that's a bad process, and it's not TDD. It's not the definition of TDD, it's just a failure mode.

Why Bother Trying?

Sometimes people don't even have to have a bad experience to adopt a defining dysfunction. They hear a sound bite or title and imagine dysfunctions. They (efficiently) go straight to disdaining the practice based on their imagined defining dysfunctions.

Do you think that Psychological Safety means you can't ever say anything that might possibly upset someone? Does Radical Candor mean that you can say whatever you want without consequence? Wrong, and wrong. Those are defining guesses at dysfunction.

If we only guess at a discipline or a philosophy or a behavior, and don't actually bother to investigate what it's intended to be, what it really means, and how people actually perform it, then we don't have the basis for an honest opinion. If we have only experienced it as a bad attempt at a good idea, we haven't formed a valid opinion.

We're Too Smart For That!

Have you jumped to conclusions based on naive attempts or imagination alone?

I'm betting we both have.  It's a human thing, and we're all human. It doesn't much matter how smart or experienced you are -- you've done this. I might just be projecting, but I've seen too many examples to believe it to be less than universal. Please prove me wrong!

I'm willing to bet that you've done so this week. Let's both look out for these mistakes, because I'm betting that we could be more successful at many things if we took the time to understand them.



Listicle on Flow and Teamwork

Some article links related to solo vs group, flow, productivity, and predictability.

Monday, January 15, 2024

Fundamentally Wrong

The Problem

An article has been shared with me by several friends, by some critics, and by some people who are both, describing how TDD is fundamentally wrong and how doing test-after development is better.

To be fair, the process the article describes -- write all of the tests, then write all of the code -- is fundamentally wrong. The problems with that write-every-test-first step would indeed lead to the wasteful problems the article lists, and the article's test-after recommendation would certainly be better than that process.



TDD Is Fundamentally Wrong is Fundamentally Wrong


Now, the problem with this article is more fundamental than the problem being described.

TDD does not mean "Write all the tests then all the code"

It has never meant that.

That is not TDD.

That is some other misbegotten travesty that has no name.


This is the fifth or sixth time I've heard someone describe TDD as writing all the tests first. In every case but one, it was described by people who self-describe as anti-TDD and who write articles decrying the foolishness that they identify as TDD (which is not TDD).

I have never seen anyone do TDD that way -- even unsuccessfully.  I have never seen anyone even try to do TDD that way. I would never sit by while someone tried to do that and called it TDD. That's simply not the process.

The one time that I read an article that actually recommended doing that, it was from a Microsoft publication early in the XP/Agile days. The public outcry was great and sudden, and the article was retracted. I don't think I've ever seen anyone else recommend that approach,  because that approach is so obviously flawed and was never the process used by original XP teams.

So What Is TDD?


TDD was originally described as a three-step dance that repeats over and over as one develops code.

You can take that straight from the horse's mouth (so to speak): Kent Beck describes the three steps as Red, Green, Refactor.



To these three steps, we (at Industrial Logic) added a fourth and final step. Others may also have independently added this 4th step; I don't know.

  1. Write a test that does not pass, and in fact cannot pass, because the functionality it describes does not exist.  This failing test is the RED step, so-called because test-running programs generally produce the results of a failed test run colored red.
  2. Write the code that passes the test.  When the test passes, it is typical that the test-running program will present the results colored green, so this is often called the GREEN step. The code may not be optimal or beautiful, but it does (only) the thing the test(s) require it to do.
  3. Refactor the code and test so that both are readable, well-structured, clean, simple, and basically full of virtue (so far).  Refactoring requires the presence of tests, so this way we can refactor as soon as the code passes the tests, rather than waiting until after all the code for our feature/story/task is finished. We refactor very frequently.
  4. Integrate with the code base. This will include at least making a local commit. Most likely it will be a local commit and also a pull from the main branch (preferably git pull -r). More than half the time, it also includes a push to the shared branch so everyone else can benefit from our changes and detect any integration issues early.


 We repeat this cycle for the next test. The whole cycle may repeat 4-10 times an hour.


We do 1-2-3-4-1-2-3-1-2-3-4, we do not do 111111111-222222222-44444-(maybe someday)333.  These are not batched.
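
To make the cycle concrete, here is one toy pass through it (my example, pytest-style; not from any canonical text):

```python
# Step 1, RED: add the next test from the list. It fails, because the naive
# "divisible by 4" rule written for earlier tests calls 1900 a leap year.
def test_century_years_are_not_leap_years():
    assert not is_leap_year(1900)

# Step 2, GREEN: the least code that passes every test so far.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3, REFACTOR: tidy names and structure while the bar is green.
# Step 4, INTEGRATE: commit locally, git pull -r, and (usually) push.
# Then take the next test from the list and go around again.
```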

Was It A Misunderstanding of the List Method?

Some people misunderstood Kent Beck's List Method, in which you begin with a step 0 of writing down a list of the tests you think you will need to pass for your change to be successful (see Kent Beck's article on the subject).

Note that you only make a list of tests. You do not write them all in code.

As you enter the TDD cycle, you take a test from the list. That may be the first test, the easiest test, the most essential test, or the most architecturally significant test. You follow the 4-step (or 3-step) dance as above. 

If you realize a test is unnecessary, you scratch it off the list. Don't write tests if the functionality they describe is already covered by an existing test.

As you make discoveries, you add new tests to the list. That discovery may lead you to scratch some unwritten tests off the list. That's normal.

Eventually, you will note that all the tests on your list have been scratched out. Either you implemented them, or you realized they're unnecessary. This applies to the tests you discovered as well as the original list.

You've done everything you can think of doing that is relevant to this task, so you must be done. This is doubly true if you have a partner also thinking with you, or even more certain if you have a whole ensemble cast working with you.

You never had to predict the full implementation of the features and write tests against that speculative future state.

It's a tight inner cycle, not a series of batches.
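
In practice the list can live as plain comments at the top of the test file. A sketch, assuming a leap-year task:

```python
# Test list (written down before any test code):
#   [x] years divisible by 4 are leap years
#   [x] century years are NOT leap years
#   [x] years divisible by 400 ARE leap years
#   [x] year 2000 is a leap year    <- scratched: covered by the 400 rule
#   [ ] what about year zero?       <- discovered mid-task; still to do
#
# Unscratched lines become the next RED test; discoveries get appended;
# redundant entries get scratched without ever being written as code.
```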

Do I disagree with the article, then?


Indeed, "write all the tests first" would only work for the most trivial and contrived practice example. It would never suffice in real work, where 11/12ths of what we do is learning, reading, and thinking.

As far as the process being unworkable, I totally agree.

As far as that process being TDD, I totally disagree. 

That characterization of TDD is fundamentally wrong.


Friday, January 5, 2024

Python Listicle!

People often ask me (directly, or just generally posting to some social site) how they can learn Python quickly.  

Learning Python is one of those things where one can begin quite easily and quickly, but there is some depth to the language that one will want to understand and use once past the most elementary stages.

If you are learning from tutorials, you might want to follow along in a REPL. You can try running Python locally (see ipython and/or bpython), in a Jupyter notebook, or on Repl.it if you want to keep your local machine Python-free for the time being.
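
A first session might look like this (the >>> prompt is standard CPython; the snippets are mine):

```python
>>> names = ["Grace", "Alan", "Ada"]
>>> sorted(names, key=len)
['Ada', 'Alan', 'Grace']
>>> {name: len(name) for name in names}   # a dict comprehension
{'Grace': 5, 'Alan': 4, 'Ada': 3}
>>> help(sorted)                          # built-in docs, right in the REPL
```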

You will probably want to install an IDE, though. There are many Python IDEs and Editors in the world, but PyCharm is the king of them all. Nothing else even comes close.

So, here are some great places to start:
  • Learn X In Y Minutes is great for experienced developers who are unfamiliar with the syntax and idioms of Python. It's all learn-by-example and is highly recommended for programmers who are exploring Python for the first time.
  • For less experienced developers, consider the official Beginner's Guide or the W3 Schools tutorial first.
  • Regardless of your level, you will want to bookmark the Official Docs which include reference material and tutorial material.
  • You will get a lot of good tips and deeper lessons from Arjan Codes on YouTube, or from the many excellent lessons at Real Python. This is true whether you are an expert, intermediate, or beginner. There is a lot of content to explore, so don't try to take it all in at once.
  • A language without a great standard library is just a syntax. The Python Module of the Week gives some of the best in-depth exposition you can find. Definitely spend time there!
  • The Python community has created a great many additional libraries, published at the Python Package Index (PyPI). There you can search, research, and learn about the many frameworks and libraries that make Python the best choice for so many jobs in real-world applications.
  • Every feature you'll use started as one of the Python Enhancement Proposals (PEPs). PEPs are to Python what RFCs are to the internet and the World Wide Web. If you need to deeply understand a feature's purpose and intention, this is the place to go.
  • What's New In Python is a crucial resource for experienced developers to keep up with changes in the language. Besides being a bullet list of new features, there's some very good expository writing there and links to the relevant PEPs.

That is a lot, I know, but if you choose one of these resources according to your needs at the moment, I think you will be well satisfied.