Thursday, April 18, 2019

Improve Your Unit Testing 12 Ways

Everyone these days seems to understand that having unit tests is an important part of the work of developing software. Yet people struggle with the practice. Here are 12 ways you can immediately improve your unit tests (and at the bottom, a few ways to go well beyond these 12 rules).
  1. Select assertions to provide clear error messages.
    Assert.That(x.Equals(y), Is.True) is not it.
    Try Assert.That(x, Is.EqualTo(y)).
    See the difference in how the test tool explains the failure.
    If the failures aren't clear, add a text message to the assertion.
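    For example, here is a minimal NUnit sketch (the values and the Checkout helper are invented) showing what each form reports on failure:

        using NUnit.Framework;

        [TestFixture]
        public class AssertionMessageExamples
        {
            [Test]
            public void ConstraintFormExplainsFailures()
            {
                decimal expected = 107.00m;
                decimal actual = Checkout(subtotal: 100.00m, taxRate: 0.07m);

                // On failure, this reads roughly: "Expected: 107.00m  But was: ..."
                Assert.That(actual, Is.EqualTo(expected));

                // The boolean form would only say: "Expected: True  But was: False"
                // Assert.That(actual.Equals(expected), Is.True);

                // And when the constraint message still isn't clear, add text:
                // Assert.That(actual, Is.EqualTo(expected), "cart total after 7% tax");
            }

            private static decimal Checkout(decimal subtotal, decimal taxRate) =>
                subtotal * (1 + taxRate);
        }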
  2. Go for Maximum Clarity
    The rule here is "you read the tests to understand the code, you should never have to read the code in order to understand the tests."
    The tests need to be better documentation than the code or its comments.
    When this is not true, it's generally because the code is doing too many things in a given function. When code is clear and simple, the tests can be also.
  3. Watch Out For Side Effects 
    There's nothing more puzzling than finding a test that sets a flag to false, calls a function with no parameters and then checks the text of some string that was never mentioned in the fixture or test setup.
    This indicates that your test is testing side-effects.
    This, in turn, indicates that your functions are being used for their side effects rather than their direct effects -- a sign your signal-to-noise ratio and your cohesion are low.
    When these tests are hard to write, it's because the code is hard to understand. 
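    A sketch of the smell, with invented names, next to a direct-effect alternative. In the first version the test must set a flag and then inspect a string the function never mentions; in the second, inputs go in and the result comes out:

        using NUnit.Framework;

        // The smell: behavior driven by a flag set elsewhere, result parked in hidden state.
        public class ReportWriter
        {
            public bool Verbose;
            public string LastOutput;
            public void Write() => LastOutput = Verbose ? "full report" : "summary";
        }

        // Direct effects: no flags to set, no side effects to hunt for.
        public static class Report
        {
            public static string Render(bool verbose) => verbose ? "full report" : "summary";
        }

        [TestFixture]
        public class ReportTests
        {
            [Test]
            public void RendersSummaryWhenNotVerbose() =>
                Assert.That(Report.Render(verbose: false), Is.EqualTo("summary"));
        }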
  4. Let namespaces work for you.
    Module.namespace.class.function is the full name of your test. If you can introduce a new namespace to group a bunch of related tests, and doing so will make the test method names stand out more clearly, then do so.
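    For instance (invented names), let the namespace and fixture carry the shared context so the method names stay short and sharp:

        using NUnit.Framework;

        namespace Billing.Tests.DiscountCodes
        {
            [TestFixture]
            public class WhenTheCodeIsExpired
            {
                [Test] public void TotalIsUnchanged() { /* ... */ }
                [Test] public void ShopperIsToldWhy() { /* ... */ }
            }
        }

    The full name then reads Billing.Tests.DiscountCodes.WhenTheCodeIsExpired.TotalIsUnchanged -- nearly a sentence.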
  5. Disconnect your database. If the logic you're testing is not written in SQL, then don't test SQL.
    Tests that run against shared databases are notorious for creating false positives when data changes in ways that tests didn't take into consideration.
    It is better not to use a database at all, or to ensure the test has its own unique and isolated instance if a database MUST be used.
    Better yet, disconnect tests from databases altogether. Mocks are your friend. You can extract interfaces or mock existing ones so that the tests run faster and the intended results are obvious.
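    A minimal sketch using a hand-rolled fake (no particular mocking library is assumed; all names are invented):

        using NUnit.Framework;

        public record Customer(string Name);

        public interface ICustomerStore { Customer FindById(int id); }

        // A fake store: no SQL, no network, no shared state drifting underneath you.
        public class FakeCustomerStore : ICustomerStore
        {
            public Customer FindById(int id) => new Customer("Ada");
        }

        public class Greeter
        {
            private readonly ICustomerStore _store;
            public Greeter(ICustomerStore store) => _store = store;
            public string GreetingFor(int id) => $"Hello, {_store.FindById(id).Name}";
        }

        [TestFixture]
        public class GreeterTests
        {
            [Test]
            public void GreetsCustomerByName() =>
                Assert.That(new Greeter(new FakeCustomerStore()).GreetingFor(42),
                            Is.EqualTo("Hello, Ada"));
        }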
  6. Avoid magic strings, magic numbers, and other magic. It is often in writing tests that I realize I need an enum, a const, a class, or what-have-you.
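    A small before-and-after sketch (all names invented) of the kind of thing a test forces you to name:

        using NUnit.Framework;

        public static class RetryPolicy
        {
            public enum Tier { Standard, Gold }

            // Before, the condition read: tier == "GOLD" && attempts < 3 -- two magic values.
            public const int MaxRetryAttempts = 3;

            public static bool CanRetry(Tier tier, int attempts) =>
                tier == Tier.Gold && attempts < MaxRetryAttempts;
        }

        [TestFixture]
        public class RetryPolicyTests
        {
            [Test]
            public void GoldCustomersGetAnotherTry() =>
                Assert.That(RetryPolicy.CanRetry(RetryPolicy.Tier.Gold, attempts: 2), Is.True);
        }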
  7. Single Responsibility Principle applies to tests.
    You want tests to isolate failures, not conglomerate them. A test should fail for one reason only, and that reason is the purpose of the test.
    Sometimes you can't avoid some collateral influence, but most of the time you can avoid most of it. The more single-focused the test is, the easier it is to understand and maintain.
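    For example, instead of one test that asserts on the total, the receipt text, and the audit log in one go, give each behavior its own test (a sketch; the names and the discount rule are invented):

        using NUnit.Framework;

        public static class Checkout
        {
            public static decimal Apply(string code, decimal total) =>
                code == "SAVE10" ? total * 0.9m : total;
        }

        [TestFixture]
        public class DiscountTests
        {
            [Test]
            public void KnownCodeReducesTotal() =>
                Assert.That(Checkout.Apply("SAVE10", 100m), Is.EqualTo(90m));

            [Test]
            public void UnknownCodeLeavesTotalAlone() =>
                Assert.That(Checkout.Apply("BOGUS", 100m), Is.EqualTo(100m));
        }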
  8. Refactor your tests.
     Eliminating duplication and structuring the tests so you can find them is not a waste of time. Test utility packages are very helpful and can reduce the size and complexity of your tests.
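     A test-data builder is one common shape for such a utility (a sketch; the Order type and its defaults are invented):

        public record Order(decimal Total, bool Expedited);

        public class OrderBuilder
        {
            private decimal _total = 100m;     // sensible defaults hide irrelevant detail
            private bool _expedited = false;

            public OrderBuilder WithTotal(decimal total) { _total = total; return this; }
            public OrderBuilder Expedited() { _expedited = true; return this; }
            public Order Build() => new Order(_total, _expedited);
        }

     A test then states only what matters to it, e.g. new OrderBuilder().WithTotal(250m).Build(), and stays small.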
  9. Ease up on test-class-per-production-class.
    Try writing one test class per testing scenario, even if it means many test classes per production function.
    A test class provides a shared setup. That suggests that a test class should correspond to some initial condition in which various activities are expected to have context-specific results.
    Therefore a test class describes some system state, from the point of view of the code that is being tested.
    Maybe many test classes per production class can be better understood this way, and trying to force everything into one test class per production class is a mistake.
    Perhaps test classes should be named the way Friends episodes were, such as "TheOneWhereRossBuysADuck."
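    A sketch of what that can look like -- each fixture's setup establishes one system state, and every name here is invented:

        using NUnit.Framework;

        public class Cart
        {
            public decimal Total { get; private set; }
            public bool CanCheckOut => Total > 0m;
            public void Add(decimal price) => Total += price;
        }

        [TestFixture]
        public class TheOneWhereTheCartIsEmpty
        {
            private Cart _cart;
            [SetUp] public void GivenAnEmptyCart() => _cart = new Cart();

            [Test] public void TotalIsZero() => Assert.That(_cart.Total, Is.EqualTo(0m));
            [Test] public void CheckoutIsRefused() => Assert.That(_cart.CanCheckOut, Is.False);
        }

        [TestFixture]
        public class TheOneWhereTheCartHoldsOneItem
        {
            private Cart _cart;
            [SetUp] public void GivenOneItem() { _cart = new Cart(); _cart.Add(price: 5m); }

            [Test] public void TotalIsTheItemPrice() => Assert.That(_cart.Total, Is.EqualTo(5m));
        }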
  10. Consider the system of names used in tests.
    Compare test names to each other, to the class that contains them, to the context in which they operate. Try to make the names reflect the domain and the situation.
    And, particularly, make the name of the test match well with the assertion, because when the test fails people will typically see the name of the test and the assertion message first; they probably shouldn't have to read the file and the text of the test to know what mistake they just made!
  11. Write the Test First
    If you have written the code first, you will write the tests according to what the code does, from the point of view of the coder who just wrote the code.
    If you write the tests first, then you are beginning as a user of the code, and the tests are helping you define a usable and reasonable API. This results in tests and code that are more readable and more usable.
    This outside-in progression tends to produce better tests and better code, where "better" means that it is harder to misunderstand. Misunderstanding the purpose and effect of code in maintenance tends to introduce defects.
  12. Remember the High-Fidelity Rule
    A critical rule of testing is that the code being tested has no idea whatsoever that it is being tested. If the code checks whether it is running in a test -- even once, even in a deeply-buried conditional corner case -- then the code is not reliable. You haven't checked what it will do in production; you've written code that passes a test, not code that works.
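    A sketch of the smell and one honest alternative, with invented names:

        using System;

        // The smell: production code that knows it is being tested.
        public class SneakyBilling
        {
            public void Charge(decimal amount)
            {
                if (Environment.GetEnvironmentVariable("IS_TEST") == "1")
                    return;  // this branch is exactly what production never exercises
                // ... real charge happens here ...
            }
        }

        // Honest alternative: inject the dependency; the code cannot tell which one it got.
        public interface IPaymentGateway { void Charge(decimal amount); }

        public class Billing
        {
            private readonly IPaymentGateway _gateway;
            public Billing(IPaymentGateway gateway) => _gateway = gateway;
            public void Charge(decimal amount) => _gateway.Charge(amount);
        }

    Tests hand Billing a fake gateway; production hands it the real one. Billing is identical in both worlds, which is the point.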
If you are trying TDD, you should probably skip right to the FIRST qualities of microtests.

Tuesday, March 26, 2019

Software Development as Buying-A-Thing Or Co-Creating?

"When will it all be completed" is a question that many people can't imagine not asking. 

"When will we start making money from this?" is maybe a better question. 
"Is this something worth investing more money and time in?" is maybe better.
But people are all hung up on software development as "buying a thing" instead of "growing a revenue stream" or "developing an audience of raving fans." 
If you're "buying a thing" then "what will it look like and when will it be done" seem like the reasonable questions.
It's a mindset thing. It's reinforced by myriad business practices and internal policies which limit the thinking to "buying a finished thing" and this keeps other ideas from taking root. 
"If I were buying a car," "if I were paying you to build a deck," etc.
So, maybe the brain-stretcher of the day: 
What if software development is nothing like buying a thing but is a partnership to build a new business endeavor? 
Custom software development can be an invitation to co-create a new product, service, or offering. It could be focused on creating a new customer base and revenue stream. It could be about working together to do good in the world. It could be something very different from "buying a thing." 

How would this change your mindset, your questions, your approach?

(originally posted on twitter)

Monday, March 11, 2019

Signal-to-Noise: The Workspace Application

I have seen people who think open spaces rock, and people declaring them “the worst possible layout.”

Some people like offices (find them “essential”) and others hate them.

I have met people who liked their cubicles, while most find them soulless and dehumanizing.

Most collocated teams like sitting in “pods” but some don’t.

I have a theory rather than a great design. My theory begins "it depends." It only becomes interesting when we get to "depends on what."

So, there is information in every space. Visual, audible, olfactory, etc. 
  • Some of that space is on your screen when you’re working at the computer.
  • Some is posters, drawings, whiteboards.
  • Some is discussion happening nearby.
  • Some is via information radiators. 
If you take away the information, then it’s harder to be both focused and aware. Starved of information, we will be less productive and do less valuable work. Think of being the one remote person trying to keep up on the changes to a project and the culture of your org without email/slack/etc. Or the dev org without customer feedback. Information matters.

That’s half the picture. The other is noise. 

Noise is just information that isn’t relevant to the work you’re trying to do or the group in which you’re maintaining membership.

Signal:Noise. Yep, that old chestnut.

But here you go: if you’re in an open space, and you’re sitting among people who are not part of your taskload, who don’t share your goals and outcomes, then being there in all the noise is unsatisfactory. 

Likewise, if people are in groups, but you sit in a group that isn’t your group, S:N sucks.

Being in an office or cubicle isn't so bad (and can be beloved) if you do solo work most of the time, and your tasks are pretty well isolated from those of the other people in the space. But if you're positively interdependent with other people, then walls and doors just get in the way.

This suggests something obvious: form should follow function; as you work, so should you sit. 

There is one other thing: you need a place for your humanity too: pictures of the family and pets, photos from your favorite vacations, posters from concerts, mementos of good times. Because you’re a human. In an impersonal space, one can feel detached from their non-work life. That kind of dis-integration is not all that healthy for most people. 

Long post, I know.
But I believe these things.
And so I don’t tell anyone what the “right” arrangement is. 


Note: this is republished from an email conversation I had a while back.

Wednesday, December 19, 2018

Q and A on Velocity, Part VIII

In our previous post, we discussed doing smaller units of work, and doing no more of them than we need to fulfill an end-user's needs. This way, we get many things done, we deliver more quickly, and we develop less unneeded code (provided we get frequent feedback from users).

Fans of agile software development will recognize this as one of the axiomatic concepts behind agile, derivative of much earlier work on incremental and iterative development.  Without incremental, iterative development and feedback, a process can hardly be said to be agile in any way whatsoever.

So we pick up the conversation today at that point:
A: ...But we get 23 points now, and they're as big as the 1/19th-sized ones.
B: That has two possible explanations.
A: What is the first explanation?
B: That they're assigning more points to same-sized work: inflation.
A: Why would they do that?
People in an organization are constantly trying to satisfy their bosses. When organizations put pressure on developers to produce more points, developers start to find ways to score more points. There are some hilarious examples in Joshua Kerievsky's article on story points at the Industrial Logic blog. Sometimes it is intentional but not always.

When a team turns in an 8-point sprint/iteration/whatever and is disappointed, they often look back over the week and realize that they under-estimated some of the stories, which were harder than they looked, and that this is why it was only 8 points. It should have been 15. In reaction, they give higher point-count estimates the next time. This kind of inflation isn't meant to deceive, but to better reflect reality and truth. But now the team turns in a more satisfying 15-point sprint, and exactly the same amount of work is being done.

That people misinterpret this as acceleration is not the team's intention. They are merely taking credit for the amount of work they actually did.

"Scoring points" and "having higher scores" classicly illustrates Goodhart's law. A measure (of completion rate) which is a reasonable measure, once chosen as a goal, ceases to be a valid measure.

You will hear development teams in scrum-like processes talking about work "blowing up." That means "like a balloon," not "like dynamite." There was some story that seemed like a 2-pointer, but it turned out to take most of the sprint because there was more risk and uncertainty than anyone realized.

A story may have seemed simple: "add a discount code field." The product owner/manager/boss may have said: "that should only be a two-point story."

It seems easy. It should be easy to operate. What could be simpler? It's one field on a form, and you subtract some value from the total.

But these things are seldom simple. Marketing and accounting want tracking, and expiry for discounts, and demographic limits on the usage. Discounts need to appear on the customer's receipt as an indicator that they got a good deal from the company (marketing!). Discounts need to be represented in billing reports.

There may be special discounting reports used to measure the success of promotions. There may be anti-fraud ramifications.

Then someone decides that there should be many discount codes, and they should combine in some ways but not others. Now UX has to redesign the data entry screen, and there are complicated rules. Maybe those need a new microservice?

But it was a two-point story, right? This raises questions:
  • Why did the team only turn in 5 points this sprint? 
  • What's WRONG with them? 
  • Why are they slowing down? 
  • What can we do to motivate them to go faster?
  • Team X turned in 50 points compared to this team's 5, why are they 10x better? 
  • Does this team need a PIP?

These are the wrong questions.  They are based on a faulty assumption, and acting on them will only make things worse. We discussed curiosity spaces related to velocity in Part VI, which you may want to revisit at this time.

Ultimately, the thing to understand about teams using time boxes is that about the same amount of work is being done every week (barring absences and meetings and events on company time). Velocity is mostly illusion.

Consider Goodhart's law, curiosity, exploding stories, and inflation.

Then come back to join us for part IX.

Monday, November 5, 2018

Q and A on Velocity, Part VII

In Part VI we discussed faux bottlenecks and blaming. These are often the result of having expectations that cannot be met, a phenomenon that was discussed in the last installment.

Let's pick up the conversation from that point:

A: I want a velocity of 30. Quit distracting me and tell me how to get that.
B: You know that a 19-point sprint means that one point is 1/19th of a sprint?
A: This I remember very well.
B: If you want features to be 1/30th of a sprint, make them 1/30th of a sprint in size and scope.

Our person A is becoming increasingly impatient. They have asked what seems to be a very simple question (how to raise the velocity), yet person B seems to be deflecting by discussing systems and human dynamics, methods of estimation, and the futility of improving speed by changing estimation units.

Rather than helping A make the plan and schedule successful, B seems to be hand-waving and declaring the plans and schedules unimportant and unreal. In short, B is not being helpful at all (as far as A can see).

On the other hand, B is quite aware that A has been asking the wrong questions. A is very much focused on the utilization of workers, on causing reality to fit early expectations, and on the disconnection of the workers from the need to fulfill promises. The right questions are just now starting to emerge.

B suggests to A that every week, pretty much the same amount of work is going to be done. If you want to count more things as done, you need to do many small things. If you want larger accomplishments, you must have fewer of them. The area of the rectangle stays roughly the same whether it is 1x6 or 2x3.

In part II, B wanted to talk about ways of increasing the area of the productivity rectangle by making it easier to accomplish the work. This, however, was deflected by A in Part IV, as A wanted to increase the amount of effort supplied rather than decrease the amount required.

A is still looking for a way of cranking up the intensity for immediate impact, while B is still discussing working with greater consistency for sustainably better impact.

As such, B suggests working in Walking Skeletons, AKA Thin Vertical Slices. This provides a way of being done more often, on more things, and giving the managers the power to decide when a feature is "done enough."


Here is the agile way of working, in tiny pieces that are individually shippable (if such a thing is desired) and which will, together, ultimately be useful to the end-users.

Any feature might reach the 20% Pareto balance point -- the 20% that provides 80% of the value of the feature, at which time it is no longer that important to continue development. It may be that some other feature has become more important to work on, so that the last 80% may never be done.

This is what is meant by the agile manifesto principle "Simplicity -- the art of maximizing the amount of work not done -- is essential."  By doing less of each feature, we get more features 'done'. By doing less, we accomplish more.  By taking on smaller units of work, we complete more units of work.

The assumption that things will be done in entirety because they were visualized and designed in entirety? A non-agile idea. Likely there is a very satisfactory 'less' that could be provided. Not only is that true, but also having a satisfactory 'less' may give the users clarity about what the feature should really entail -- a clarity that they did not have at the fat end of the cone of uncertainty.

Software work tends to involve wicked problems, where providing a solution to a problem changes the nature of the problem, and problems bleed over into each other. As such, getting fast feedback helps steer products in a way that working hard does not.
A: They would be very tiny features.
B: Yes.
Here we see a light beginning to dawn, we hope. But A recognizes that B is not increasing the rate at which the team burns through the backlog, only helping cope with the rate at which things are being done, and so A rejects this deflection once again.
A: But I want to do big things.
B: Then you will get fewer of them done for a given period.
A: Why can't I get more big things done per iteration?
B: How many gallons fit in a 5-gallon bucket?
Join A, B, and me for Part VIII.

Monday, October 29, 2018

Q and A on Velocity, Part VI


In Part V, we examined the relationship between working harder and going faster. I hope the message you came away with was that we want people to work less hard, and fewer hours, so that they can deliver more -- though conditions have to be right if we want work to be more productive for the time invested.

Let's pick up there:

A: So how can we actually deliver functionality faster?
B: Ah, now that is a quality question. How long does it take to deliver a feature now?
A: Too long. We need developers to speed up.
B: What % of lead time is represented by developers' cycle time?
A: Why are you asking about lead time? I'm talking about development.
B: Lead time is a measurement of delivery. Development cycle time is only one element of lead time.
This part of the conversation illustrates a lack of curiosity about processes. Person A has clearly decided that development is the bottleneck, and isn't really interested in knowing how the rest of the process may be restricting or preventing delivery.

This is not uncommon.

Quite often the problem of long lead times will be attributed to development, or where post-development testing is practiced, to QA.

A friend of mine was involved in a large time-critical development project as a testing professional. He was called up in front of the directors of the company for "delaying the release."  The director of software development (separate from QA) argued that the QA department was not "passing the tests fast enough."

It was said unironically. The situation was that the QA department was finding an exceptional number of defects (partly, perhaps, because the development director had decreed no design, no refactoring, and no testing during development in order to "speed things up"). The QA department was returning the code for rework, which was slowing down new development.

The problem? Well, the queue and the inventory appear at QA, so the QA people must be at fault for not moving the inventory through more quickly, right? Certainly not the developers who had faithfully produced mounds of defective code as fast as possible for months.

In other organizations, we have found development blamed for features that were stockpiled by marketing, who were waiting until enough work had been produced to make a great release-party advertisement. While value could have been released to customers, the idea of "making a splash" was too much a part of the culture of the organization. Development typically took a few days per story -- quite quick, really. But those features sat in the queue for months. As a result, some people in the organization had decided that development was "taking months to write the code." In this case, an architect refused to hear the statistics and the measures. "I disagree," said the architect who was presented with the stats, "the developers have to go faster."

Why is this? Because development and QA are where the rubber meets the road. In part III we stated that how long it actually takes is real; estimates are made up. When perfectly good-seeming plans aren't met by real work, most of us tend to meet the disappointment by placing blame. The people pinched between the real work and the intentions are the easiest to blame.

Where we should see a curiosity space, we tend to see a "problem."


If we see the difference as a problem, rather than a curiosity space,  we tend to see the people who are at the bottleneck as being the bottleneck.  This is an error that should be obvious to people with a modicum of systems thinking, one would think, but most people under pressure are more prone to emotional reaction than careful systems thinking.

Let's not blame people for not seeing what we hope would be obvious. To pillory people for their "ignorance" is seldom helpful. It's just returning blame for blame. How does that help?

Instead, if we still our impulses, we can approach the problem with a heart that is at peace, seeking solutions rather than confirming suspicion and blame. Curiosity (and our sense-making apparatus) might have a better result here.

A: I don't want to deal with lead time. Can we just focus on developer cycle time?
B: Of course, but you may not be working at the bottleneck so it may be wasted.
A: I only want to talk about developers.
B: Okay. As long as you are aware it may be pointless.

And here is the deep truth of this installment, which echoes so much of the Theory Of Constraints:
If we are not working at the bottleneck,
our efforts to improve flow are pointless.

This means that we may have to move beyond our unchallenged assumptions. There is a series of statements like "it's the devs," "it's QA," "you all aren't working hard enough," "you aren't taking this seriously" -- all assuming that problems are motivational rather than technical.

If one is faced with such a stonewall, what does one do? Let's pick that up in part VII.

Tuesday, October 23, 2018

Q and A on Velocity, Part V


In Part IV, we talked about the thorny clashes between the reality of promises, and the reality of development. This installment picks up on some truths about working harder and going faster.

A: Wait a second... You never actually said we can't go faster. You only said that trying harder and adding people weren't the way.
B: That is true.
A: So we could possibly go faster?
B: Certainly.

Frankly, we don't know how fast developers might be able to go. I've had friends call and tell me about taking on estimated work and finishing it in a morning because the code was well-factored, readable, and well-tested. One friend said that "all the functions I needed were already written and easy to find."

Robert Martin always said that the speed of today's development work mainly depends on the quality of the code you will be working on. Low-quality code? Low speed. High-quality code? High speed.

In addition, we've asked around and found that developers spend about 25% of their time in meetings (the more senior spend more time). That's 25% right off the top.

Of the time remaining, most developers spend 60%-80% of their time fixing defects. That's a heck of a lot of time. If they could go two times slower but produce no bugs, they would be at least 10% faster: when only 20%-40% of your time goes to new work, half-speed work applied to all of your time is an effective 50%.

Of the time that's left, silos and approval processes and other interruptions force developers to spend a bit of their time waiting for other people. Work sits in queues.

And of course, when a programmer's work has to wait in a queue, the conscientious programmer keeps busy by pulling an extra task off the queue.

Now when work is returned from testing, or from an approver, or from some kind of review, it has to sit in a queue waiting for the developer to be available again. The average corporate programmer we asked has about 5 concurrent tasks (or branches) open. It can be as high as 15.

You can see the time is whittled away to near-nothingness. It's no surprise that a piece of work takes forever to develop, even though the developer is busy and has a lot of tasks in progress at any given time.

Developers say "I'm not blocked, I have plenty of other tasks to work on," and it's true. But the work itself is blocked. The work is not going to production soon enough because the developers are busy doing something else. This is why Lean advocates say "watch the baton, not the runner."  Flow efficiency is about the throughput of a system, not about the busyness of the workers.

It's a messy system, made messier by processes that include a lot of silos, individual effort, quick plate-emptying, approval cycles, branches, and what-not.

Most of our systems seem to do more software
prevention than software development.

But still, the quality of the code is the big determinant. If the code is well-organized, and good tooling is used, and the developer is familiar with the system, and the change doesn't introduce any major architectural rearrangements, ... well, how amazingly fast might it be?

Faster should be easily possible, but getting there may be hard human work, and not just for developers.

On the topic of hard human work:

A: So how can we get the developers to work harder?
B: Take away their tools. Add more bureaucracy. Use slow computers and dead programming languages.
A: And that will bring up the velocity?
B: No. You asked how to make their work harder.
This highlights a problem in the approach to getting more done. In Part II, we hinted at this. In 2009's Platitudes of Doom article, we went on about this at length. There is an assumption that the reason work is slow is that people aren't working hard enough.

The focus on busyness and effort is not helpful.

The above conversation is a bit tongue-in-cheek, but it illustrates the point that the problem isn't in busyness and effort expended, but in results being attained. Most programmers are working far harder than they should have to and creating far less value in doing so.

If we insist on putting more effort into the work, we end up getting more brute-force, and not the best results we hope for. This will be discussed more whenever I get into writing up The Journey.
A: We want them to work harder so that we get more done.
B: If you want to accomplish more, shouldn't you make the work less hard?
A: We just want it done.
B: Of course, and so do they.
Here is the interesting magic: if we want people to get more done, we are more likely to succeed by removing the waste, uncertainty, and risk from the process than by putting greater effort and strain into it.

Again, in part II we talked about the formula of effort provided to do a task over effort required. Scroll down to the graphic that illustrates this point and the text that surrounds it.

This snippet of the discussion is really just restating the points that you've already seen. But there is some chance that you are not a long-time reader of this humble blog, out in the wastelands of blogger's dark corner, so maybe it will help to give you some more reading on the topic. To wit:


Maybe the programming isn't slow, considering the condition of the code base and the presence of risk and uncertainty in the requirements, the difficulty of predicting intellectual effort, etc.

Maybe the estimates are low, rather than the work being slow?
We don't really know. 

We only know that there exists a difference between what we expect (and want) and what we are really achieving. We can look at that as a betrayal, or we can look at it as a curiosity space.

I know which of those choices leads to blame and frustration, and which leads to alignment and improvement. 

But I'll let you ponder that until Part VI.