Posts

Do You Even Plan, Bro?

Ain't No Plannin'

I see a lot of "agile" teams and their "planning" meetings, but if you'll forgive me for saying so, I've not seen a planning meeting for a long time. Typically:

- No plan is presented
- No questions about the plan are surfaced
- No revisions to a plan are made

What actually happens, then? Usually, someone loads up a list of tasks, and the entire meeting is spent assigning people to the tasks. Does that in ANY way resemble planning? What was planned, other than individual workloads? How was the plan assessed, improved, revised, or confirmed? What was, even, the plan?

So, what I have in mind is simple.

Don't Assign

In your next planning meeting, do not assign any work to any individuals. They can pull work as part of the sprint. They are cross-functional and self-managing, right? They are professionals you trust and respect, and they have autonomy, so they can pull work. If you are the PO/PM, then you have an ordered backlog for them.

Show Your Plan

P...

Working in Groups: Compromises or Contributions?

Some time ago, on a social media platform, during a discussion about technology, a pundit posted a piece arguing that teamwork was a horrible idea.

The Compromise Theory: His thesis is that an individual can have a great idea. When other people get involved, they have differing ideas. To settle the differences, the group has to make compromises. Every compromise is a degradation of the original idea's purity. By definition, he said, every compromise is a second-best choice. Eventually, the idea is so diluted and compromised that it is hardly worth implementing.

The Contribution Theory: Cross-functional, diverse teams are well-documented and understood. They make better decisions, make rapid progress, and approach a problem from more angles. The thesis here is that "two heads are better than one" for problem-solving, and that the work of an individual (rather than being perfect and pure) is likely to have more flaws and a narrower vision than the work of sev...

The Big Release Downward Spiral

The riskiness of a release is proportional to the number of potentially untested paths in the new code. If there are a thousand new possible paths and we've only tested four, then the release is a significant risk to the installed base. If there are two paths and we've tested both, then it's not risky at all.

The larger a release (the more stockpiled functionality) and the less complete the characterisation via tests (automatic and manual), the more a company must grow bureaucracy and processes around releasing. The harder it becomes to push a release through all the bureaucratic gates, the more work gets stockpiled before release. This is the self-eating mechanism that CD dismantles.

When a release may be only a couple of lines of code, the risk comes down. When it's one function, reached in only one way, then it's a limited set of paths. Saving up for a big release stockpiles pain and risk.
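A minimal sketch of that proportionality, using the post's own numbers; the linear model (risk simply tracks the count of untested paths) is an assumption for illustration, not a formula from the post:

    # Naive illustration: risk grows with the number of untested paths.
    def untested_paths(total_paths: int, tested_paths: int) -> int:
        """Paths that could still hide a defect."""
        return max(total_paths - tested_paths, 0)

    # Big release: a thousand new paths, only four tested.
    print(untested_paths(1000, 4))  # 996 places a surprise can hide

    # Small release: two paths, both tested.
    print(untested_paths(2, 2))     # 0 -- not risky at all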

Considering Spec Driven Development

People are making a big deal about the new way of agentic working: Spec-Driven!!!

But, wait... Big Design Up Front (BDUF) is something we tried for many years, in many companies, many times. It was proven to be a losing proposition and a bad idea. We did it in the 80s, and by '96 or so we had alternatives. If the idea of spec-driven is to fully detail a system up front, and then use agents to implement it, then I think we're about to repeat a historic mistake.

But, wait... BDD and TDD are also specification-driven, just in very tight loops with constant verification. The test acts as a specification, and we implement code to make all the tests pass. We do this incrementally and iteratively, adding tests as we go. If the idea of spec-driven is to iteratively and incrementally build software covered by the safety mechanism of tests, with a tight feedback loop, then maybe we're about to repeat one of the best ideas of the 20th and 21st centuries.

But, wait.....
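A minimal sketch of a test acting as a specification, in the tight-loop style described above; the apply_discount function and its behaviour are invented for illustration:

    # test_discount.py -- run with pytest. The tests state the expected
    # behaviour first; the implementation exists only to satisfy them.

    def apply_discount(price: float, percent: float) -> float:
        """Reduce price by percent, never below zero."""
        return max(price * (1 - percent / 100), 0.0)

    def test_discount_reduces_price():
        assert apply_discount(100.0, 20) == 80.0

    def test_discount_never_goes_negative():
        assert apply_discount(10.0, 150) == 0.0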

Bash Variable Expansion

This is one of those things that just doesn't stick in my head, so I'm dropping this note to remind myself. I often have trouble recalling cryptic variable names in bash, make, and perl. For my own sake, I thought I'd make a quick list of favorites and point to some sources. Maybe you'll find this useful too.

- Variable value, but exit if the variable isn't set: ${var:?"error message"}
- Use a default if the variable isn't set: ${var:-"default"} (the := form also assigns it: ${var:="default"})
- Substring: ${var:start:length}
- Length: ${#var}
- Uppercase (bash 5.1+): ${var@U}
- Lowercase (bash 5.1+): ${var@L}

The first one is quite helpful when you expect a particular set of parameters and don't want to write if/then logic about each of them.

    #!/usr/bin/env bash
    # Both parameters are required; the :? expansion aborts with the
    # given message if either is missing.
    directory=${1:?"You must provide a directory to search"}
    pattern=${2:?"You must provide a search term for filenames"}
    find "${directory}" -name "${pattern}"

See more at the GNU shell expansion page.

Does AI help us care less?

A bit of background: I was chatting with some colleagues and friends this week about AI-augmented workflows.

M was talking about having LLMs "write the epics and features" for teams. He was envisioning a flow where a PO or PM would have much less cognitive effort and could do this work alone, then hand it off to the teams for execution. Of course, the teams are using LLMs to help with testing and code generation, and to explain code that should probably be refactored to make LLM explanations unnecessary. There are also AI code reviews, though they burn a lot of tokens for what they deliver.

I was more cognisant that we should never (per the Agile Manifesto) use tools or processes to reduce the interactions between the humans doing the work, but might want to use them to improve or enhance those conversations.

M talked about the really important conversations he wants to focus on: the discussion that happens when users and sponsors can see the working product increment and gi...

I can't test that, it uses STDOUT (Python)

You're working with some Python code and would like to write a test, but... "I can't test that - it uses STDOUT!"

Okay, well, that's really not such a big problem to handle. The solutions to this problem are actually simple enough that you can apply them to many other situations that might otherwise encourage you to skip tests just this one time (again).

In our modern age, we have the advantage of ubiquitous LLMs in browsers, IDEs, and CLIs. We shouldn't be stuck on common problems anymore, nor must we remain unaware of features in our test libraries. Rather than look at print statements as a brick wall, you can get started writing useful tests right away.

So, let's talk about a few options:

Pytest "capsys" fixture. This is pretty trivial. You name 'capsys' as a parameter of your test function (no import needed), and you have access to the content printed to stdout and stderr. These are what we would call "listener fakes" - they listen to, and remember,...
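A minimal sketch of the capsys approach; the greet function is invented for illustration:

    # test_greet.py -- run with pytest. 'greet' stands in for any code
    # that prints instead of returning a value.
    def greet(name):
        print(f"Hello, {name}!")

    def test_greet_prints_greeting(capsys):
        greet("Dana")
        captured = capsys.readouterr()  # captured.out and captured.err
        assert captured.out == "Hello, Dana!\n"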