Friday, July 16, 2010

Copy And Edit Revisited

Vadim reminds me that I need to address root causes of Copy-Paste-Edit programming, rather than merely ranting about how bad a practice it is and how it ruins good code. Of course, he is right. That is part of being Vadim.

I've previously ranted about the ill effects of copy-paste-edit programming, but it would be unfair to say that there is never a need for it, or that people who do it are simply stupid and lazy.  The problem would not be so prevalent if it did not have some reasonable basis in practice. However well-intentioned and useful it is, its net effect on a code base is overwhelmingly negative.

Here are a few root causes I recognize, and I'm open to hearing more.
  • Tedious construction semantics encourage copying. Many APIs are very thin access to bean-like objects, and yet using them correctly can be tricky. You have to know what to set, in what order, and what to call next.  It is far easier to copy a correctly-set-up use of an object than to build one from scratch.
  • Copied code is a working initial state from which to make progress. This is useful not only when dealing with complicated code in the current system, but is especially beneficial when copying code examples from docs, books, or online code repositories.
  • Complex multi-line operations have many ways to break. If the system shows poor cohesion or has multiple ways of doing the same job (and some ways work better than others) then copying a usage that works seems pretty safe.  It is worse when the API has a "seed and harvest" interface, where one must set certain variables (seeding), then call a function, then collect the results from various variables (sometimes the same ones, sometimes not).
  • Copying minimizes the chance of breaking existing code, which is the primary fear in legacy systems. If some (or most) code is ill-tested, then any editing of existing code can fail in unexpected ways. Copying an algorithm and changing the variable names to match the local context preserves the original code. 
  • It is easier to copy than to study.  By copying code that already pretty much works, you can make minimal changes and hopefully it will work without your having to understand the underlying concerns. It allows you to get in and out quickly without doing research. Copy/paste gives you code that is basically sound at a point in time, and can be refactored to purpose. 
  • Moving code into functions requires thinking about design.  Is it one method or several? To which class does it belong? The original owner of the code? The new code? A class currently used by both? A new class? An existing library? 
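The construction problem in the first bullet can be sketched in Python. Everything here is invented for illustration (`ReportJob`, its fields, `run_report`); the point is how a "seed and harvest" ritual invites copying, and how one extracted helper removes the temptation:

```python
# A hypothetical "seed and harvest" API: all names and fields are
# invented for illustration, not drawn from any real library.
class ReportJob:
    def __init__(self):
        self.start_date = None
        self.end_date = None
        self.format = None
        self._rows = None

    def run(self):
        # Seeding must happen first; order and completeness matter.
        if self.start_date is None or self.end_date is None:
            raise ValueError("dates must be seeded before run()")
        self._rows = [("total", 42)]  # stand-in for the real work

    def harvest(self):
        return self._rows

# The tedious, copy-prone usage: every caller must repeat these steps.
job = ReportJob()
job.start_date = "2010-07-01"   # seed...
job.end_date = "2010-07-16"
job.format = "csv"
job.run()                       # ...then fire...
rows = job.harvest()            # ...then harvest.

# One extracted helper removes the temptation to copy the ritual:
def run_report(start, end, fmt="csv"):
    job = ReportJob()
    job.start_date = start
    job.end_date = end
    job.format = fmt
    job.run()
    return job.harvest()

rows2 = run_report("2010-07-01", "2010-07-16")
```

Once `run_report` exists, the next programmer has something to call rather than something to copy.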
Is copying always wrong? I think it is not.  The act of copying itself should kick in your "spider sense," but it is not necessarily harmful.  Copying examples from outside the system might be useful as a starting point.  Copying code from inside the system (including copying an existing test to help create a new test) can be helpful to the programmer.  However, having duplicated code in the system is always wrong, and having code that is placed badly is wrong as well.

It could be useful to put some rules around your use of copying.
  • Don't leave duplicated code in the system. If you copy, consider your copy to be merely a starting point, and also a point of technical debt. Pay it off quickly by refactoring the duplication out of existence as soon as possible.
  • Even when you write unique code, extract methods and move them to their most appropriate class, so methods can be called instead of copied next time. If you leave manipulations in the user of a class, you are encouraging the next user of that functionality to copy it as well. 
  • If the code is ugly enough (complex initialization, multi-step operations), extract the non-unique parts as new methods called from both copies. Interfaces do not have to be ugly and complex.
  • When you find duplication, or create duplication, finish your refactoring step by sweeping the code base for other copies, and correct them likewise.  It doesn't help if two versions out of a dozen are refactored. You want the whole system to improve as you work.
  • Copying might not be so bad if you refactor the area you're copying from and move the extracted methods to appropriate new homes.  Copying one or two function calls might not hurt you like copying blocks of code.
Everything I said about the evils of copying code still stands.  If you copy a block of code, you are probably going to screw over your whole development organization a little (especially when the system changes and your original is no longer correct).  Copy-Edit is still the way to make a bloated mess of your system.

Friday, July 9, 2010

Stop Playing Catch And Release In Quadrant 4.

Basically, a bug is a combination of a design flaw and a testing gap.

When I say design, I mean "design-as-written", not some keen plan that some keen mind built in some expensive drawing tool. I mean the logic of the system. Built code can be said to have a design, in that it is intended to work in a particular way and has been adjusted to work in real circumstances. A non-coding architect's idea for a system is not a design. It is merely a guess, or perhaps a dream.

If code has a perfect design for a given set of tradeoffs, then it will work perfectly for that given set of tradeoffs. If it must run on Windows and use SQL Server, then it will work as well as possible under those conditions. It may fail because of supporting systems, or because IE/IIS ate all the CPU and RAM, or because it is interrupted, or starved out by virus checkers, etc. For where it lives, and what it does, it will work great... provided the code is written to use the design.

Side note: I'm not saying that a design can be devised up front that will be perfect, or even that a design can be evolved to perfection (necessarily). I do not think that a perfect design is possible when working in a waterfall manner, and am firmly convinced that "perfect" is a concept that wiggles -- it changes meaning as a system and its environment evolve. If there are ever moments of perfection in design, they are brought about through incremental change, and are eventually spoiled by changing requirements and systems. But bear with me, and assume that there are possible moments of perfection.

Perfect test coverage would be an automated suite of tests that exercise and evaluate every possible behavior of a software system. That would be every logic branch of every component of the system under every set of circumstances. It doesn't happen in practice, but that's not because it's impossible but because it is costly.

If perfect test coverage and perfect design are not possible, it is mainly because we don't know how to do them in a cost-effective manner. As a result, users are comfortable with a certain amount of slop provided it does not prevent them from getting work done and living meaningful lives. Some users are used to more slop than others, since some will reboot their machines a few times a day and others scream if they have to reboot once a year. Some will retry a task in several different ways, others will throw up their hands. Bugs are tolerated if their frequency and consequence are slight.

So let's draw this as an extreme set of opposites, in a quadrant: tested vs. untested code on one axis, perfect vs. imperfect design on the other.  Realize that most code is either mostly tested or mostly untested, and either good or not-so-good, but a discussion of extremes helps to clarify our thinking about software defects.


Quadrant 1 (tested, perfect) code is as good as it gets. Q1 code is blameless. Though the system around it may misbehave and cause ill effects, Q1 code does the best possible job in its environment. All code wants to be Q1 code, and all users want to think they're purchasing an all-Q1 system. All programmers would prefer to work in a Q1 system, because it is easier and more rewarding to do so. This is the holy grail, but is rare due to perfectly legitimate organizational priorities that prevent the creation of such perfect systems. Sometimes customers and markets don't *really* want perfect systems, if it means not having some other feature that is attractive. Tradeoffs are made.

In Quadrant 2 (untested, perfect code) there are no currently-known defects and the system seems to operate perfectly. This is a mysterious place to be, because defects can arise at any time since the software system and its business and technical environments will change over time. When defects appear in quadrant 2 they will be discovered in the wild, not in the lab. Q2 is comprised of code that is at risk. The best we can say of supposed Q2 code is that it has not yet proven to be Q4 code.

Quadrant 3 is comprised of discovered defects. Some of these will be fixed before release, and those whose effect and frequency are slight may be deferred to future releases. Tests on known bugs-of-little-effect will tend to be ignored or disabled (effectively moving the defect to Q4) until the flaw becomes interesting enough to resolve.

Quadrant 4 is primarily the result of irresponsible development or intentionally ignored known defects (see Q3). The design flaws are not well-understood or demonstrated in automated tests. Running code that is imperfect and untested is rather a crapshoot. Users are essentially alpha testers, and should not expect consistently good experiences. Trying to fix defects in Q4 code is a nightmare and fraught with disappointment. The best way to fix Q4 code is to move it first to Q3, then try to move it to Q1. The worst way to fix it is to try to push it up into Q2.

Almost all software systems in existence have some Quadrant 4 code and some Quadrant 3 code. TDD systems are mostly Q1 and Q3, but legacy systems all have some Q4 code and might have some Q2 code (not that we could tell).

Software that doesn't work is in Q4.  If you have hundreds of bug reports, then you have hundreds of lines of code in Q4, where it's not very good code and you don't have good testing. Your bugs tell you that your code base needs some unit testing goodness applied.  If you simply fix the bugs without putting testing around the site of the defect, you are playing catch-and-release in Q4 and nothing is really going to get better for your product or your organization.
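Moving a defect out of Q4 starts with pinning it in a test before fixing it. A minimal Python sketch, with an invented off-by-one bug as the defect:

```python
def last_n(items, n):
    """Return the last n items of a list (fixed version)."""
    if n <= 0:
        return []
    return items[-n:]

# Regression tests pinning the old (hypothetical) off-by-one defect:
# the buggy version returned items[-n - 1:], one element too many.
# Writing these first moved the defect from Q4 (unknown, untested)
# to Q3 (known, tested); the fix then moved the code to Q1.
assert last_n([1, 2, 3, 4], 2) == [3, 4]   # buggy version gave [2, 3, 4]
assert last_n([1, 2, 3], 0) == []
```

Fixing the slice without keeping the assertions would be exactly the catch-and-release move: the code goes straight back to Q4, untested, waiting to be caught again.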

The goal for software developers should be to move code from Q2 and Q4 through Q3 to Q1. I can see hardly any reason why developers in 2010 would even consider working in Q2/Q4, and I particularly cannot see why organizations would believe that a group of people with mixed experience and skill sets would consistently perform in Q2.

Q2 is essentially a myth. Your Q2 software is probably Q4 in reality. Maybe you should start investing in the move to Q3.  Stop playing catch-and-release.