Saturday, February 27, 2010

Career Pathing?

I know that companies have the best intentions. They want to provide opportunity for their employees in hopes of retaining them and encouraging desirable behaviors. They want to reward those who perform well and eliminate the dead wood. Even so, it is 2010, and career pathing is done, like annual reviews are done, like the 80s are done. This is doubly true for agile organizations, where the dynamics are so very different.

The old idea was that an employee would join a career path, and that path would lead them to some management or technical position that would provide them with respect from peers and superiors, autonomy (perhaps their own team to lead), and increased compensation. Employees could see if they were tracking or not, and this would provide incentive to excel by doing the things the employer most highly values. This is not a bad idea and it used to make a lot of sense.

Now career-pathing is an answer to a question that nobody is asking.
  • Formal roles and titles have never meant less. Organizations are (thankfully) flatter now, and teams are likely to be mixed groups of seniors and juniors and consultants and contractors. Gone are the days of the guru programmer directing a team of insensate code monkeys. Gone are the days of specialized stratifications. Generalism is killing the career path.
  • The average time a programmer stays with a company is 3 years. Many of us have outlived multiple employers, even. We don't have the long-term stability that the IBMers had in the 70s and 80s. What is the sense in pathing a career for someone who won't be around? Why build a path that lasts longer than your company? Temporariness is killing career pathing within any given business.
  • Programming is not a management skill (as management is practiced), so a path that leads one from writing server back-end code to vice presidency does not really make sense. The Peter Principle explains how much it hurts to promote people out of their competencies. Rising through the ranks is not necessarily the value proposition it was once considered. It still makes sense to groom someone whose interests lie in the area of software development management (Lord knows we need competence there), but in general, the Peter Principle has called career pathing for programmers into question.
  • Career pathing limits rather than expands options. A technical person has many directions in which to grow. They may take a strong interest in IT and tool-building, database management, performance management, systems administration, user interface development, team coaching, technical writing, technical management, human resources, or any other useful skill. Many paths are necessary to move the professional forward in a way that is meaningful to him.
  • There is a means/ends inversion in career pathing. It becomes important whether one is tracking well, rather than whether he is doing good work and producing value for the company's customers. It has happened often enough that the things that make our customers successful are not the things that get us promoted, and can even get us fired. Distracting from customer value degrades the career path.
  • In a typical career path, employees compete against each other for increasingly rare positions at higher levels. Esther Derby has written much about how this defeats morale and teamwork. In some situations one might find the "kiss up, kick down" strategy to be successful, where one panders to his bosses while sabotaging his peers and underlings. By political maneuvering, he is promoted (even "fast-tracked"), at the cost of productivity and harmony within the company. Destructive abuse of the career path discredits and defames the process.
  • Organizations adopt an "up or out" point of view, so that people who perform well in their current position but show no enthusiasm for the next station on their path are devalued. Depriving the company of skilled, satisfied technical workers brings disdain upon the career path.
  • Again referring to Esther Derby's work, we find that it is neither possible nor advantageous to try to isolate and rate the work of an individual in a collaborative effort. The attempt is sufficiently hard and damaging that career pathing among highly productive, coordinated teams actually reduces the effectiveness and harmony of the organization.
  • Career pathing isn't even a particularly good way to award respect, autonomy, or compensation. Profit sharing, cost-of-living adjustments, and the like have been shown to be better ways to reward people who work on teams.
  • It is neither a lean nor an agile way to proceed, as career pathing done well is a complex system, and one that is not needed to produce quality work, improve morale, or increase team collaboration. It is an expensive waste of time and a distraction from the work of doing work.
  • You already know who the skilled programmers are, who the subject-matter experts are, who the weaker players are, who has leadership potential, who is watching the calendar and the budget, and who is just watching the clock. On a modern team (especially an Agile team) we try not to separate out the leaders, but to keep them in the team to bolster the weaker players. A strong team member earns respect from his peers by doing good work, which lends him further autonomy. An official tracking system only serves to usurp that earned, organic authority, which casts further doubt on an official promotion track.
Realize that the author claims no special HR savvy other than having been a human resource for going on 31 years as of this writing. From his viewpoint, the career path is naive, damaging, and irrelevant in 2010. He remains willing to be convinced otherwise, and welcomes all arguments to the contrary, as well as encouragement and further mentoring on the subject.

Wednesday, February 24, 2010

The snowball

So Scott and I are pair-programming, when we realize that there is code in our test setup that ought to be in the model. We look and find that it's not, but it is duplicated in a number of other test setup routines. Oh, no. Time to take a code improvement excursion. We know that it's bad form for test routines to contain code that duplicates the system, and this looks like a low-fidelity reproduction.

So we find the class that OUGHT to contain it, and transfer the method. Now we go on a quick excursion through the code base and find the code that actually does the work in the middle of a fat method in a UI code-behind. Deep sigh.

We change the production code so it uses the model, including our new code, correctly. Pop the stack, continue. Then we find (via ReSharper) the same code duplicated in a dozen or so other places in production code. Take a deep breath, push the stack again.

We move through the code base simplifying a bunch of methods by replacing the duplicate with a call. In the course of doing so we see a few very puzzling uses. We realize that this code was modified ever-so-slightly to have a slightly different effect. Deep breath, stack push.

We add the new method and fix the code that duplicated it. We mention to each other how much cleaner the code is with some of this cruft removed and replaced with a simple method call.
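The shape of the whole excursion fits a minimal Python sketch. The names here are hypothetical (a made-up "overdue" rule standing in for our actual duplicated logic): the rule moves onto the model, and every former copy becomes a one-line delegation.

```python
import datetime

class Invoice:
    def __init__(self, due_date, paid=False):
        self.due_date = due_date
        self.paid = paid

    def is_overdue(self, today):
        # The rule that was previously copied into test setups
        # and a UI code-behind now has a single home on the model.
        return not self.paid and today > self.due_date

def overdue_invoices(invoices, today):
    # A caller that used to duplicate the comparison now delegates.
    return [inv for inv in invoices if inv.is_overdue(today)]
```

Each stack pop in the story above is one more caller converted to this one-line shape.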

We find a very strange bit of code in the setup for a test on a report. We can't imagine it ever working. With the test setup corrected, the test fails. Push the stack. We dig through the test and find that it in fact did not work right, relying on a failure in setup to produce a result that looked good if you squint at it just right. We call our Customer and ask how it should work. Looking at the setup and the test assertions, he can tell it is wrong and never should have worked as specified. He corrects the spec, the test runs green.

We get it working, watch all the tests run green, smile at the amount of code we've cut out of the system and the report bug we've fixed. Then we realize that we just did that in our feature branch and not the master branch. Our next merge is going to hurt.

Day is over, goodnight to partner, supper with family, band practice, and back to email and blogging for a little while. Tomorrow we go back to popping the stack.

It is past my bedtime on a Wednesday night and I'm thinking about code. I need to get some sleep so I can hit the ground running tomorrow. I have a personal holiday coming up soon, and want to get some real progress while I can.

These days can be fun and trying at the same time. We look forward to the whole code base growing cleaner and tighter over time. I get a little burst of perverse joy from figuring out what bad code is doing, and a warm glow when we fix it. I worry when I realize how much underdeveloped code we still have. I worry about when we will finish our assignment.

I know darned well that cleaning the code is part of working in the code. The approving angel on my left shoulder whispers, "Code must be unique. Duplication is decay. Unique code is clean code."

A disapproving voice whispers from my right shoulder, "Code that works now is worth more than code that will work eventually." Maybe I could have gone a little further before taking on this effort. In my defense, it was all green when we started refactoring, but I don't want to miss the coming release.

This is the life of a software developer working in code that holds on tightly to its contracted big-bang origins. We've worked wonders this year, and yet wonders remain to be worked.

... to be continued ...

Survey Frustrations

Via AgileVoices, I found a survey that asks all the wrong questions.

On the first page, it asked me for my role on the agile team. I selected "other" and typed "member", since the other options segregated roles in a pre-agile way.

The next page was all about whether my agile project was a success. Of course success is defined as being free from requirements changes, on budget, and on schedule. I didn't go any further.

If we try to measure our agile projects by waterfall standards, I think we'll find them all to be pretty poor waterfall projects, in the same sense that my son's dog is a very poor water dragon and his water dragon is a pretty lousy dog.

The whole point of agile is not to commit to a set of unchanging requirements up front, but to deliver a useful product at the end. The point is that we all work together according to our strengths and the need of the hour, not that we segregate by roles and tasks.

I'm simply disgusted. This is 2010, and we are so clueless?

Monday, February 22, 2010

Simple V. Clear V. Easy V. Primitive

I once posted a blog about the meaning of the word "Simple" as used in "The Simplest Thing That Might Possibly Work", an activity that generated a little buzz and some first steps toward resolution. The problem, I realized, is that "simple" gets mixed up with "easy", "primitive", and "clear" in most unfortunate ways. This was true of me as well as of others in the conversation.

As I search the web for words like simple and complex, clear or confusing, perhaps easy and hard, I realize that we're all splitting hairs in different ways. I suggest that we consider some more crisp boundaries when discussing qualities of code. Here are the axes I suggest:
Simple v. Complex: relating to number of parts and steps involved
Clear v. Puzzling: relating to ease of comprehension
Easy v. Difficult: relating to ease of implementation
Developed v. Primitive: relating to degree of investment in types
Within this terminological divide, simple v. clear v. easy v. primitive becomes a much simpler categorization.


The term The Simplest Thing That Might Possibly Work is aimed at avoiding blockage. Ward Cunningham points at the idea of the shortest path to a solution as "simplest", while still regarding refactoring as necessary to reach an increasingly simple form (adding a touch of fuzz to "simplest" in original context). He also addresses the problem of clarity, but clearly "Easy" is part of his process.

I have touched on this in my article on using better names where I show an example of perplexing python code, and through the act of renaming I make the code quite clear.

One of the problems with the article's original code is that it is too primitive, and being so very primitive increases the number of parts (methods, variables, constants). Eventually I create a method (is_flagged) which moves the primitive subscripting and literal values out of the way, leaving the example routine in a state that is not only more clear, but also more simple than before. The addition of a class with a named method makes the code more developed (less primitive) while also making it more clear (less perplexing) and leaving fewer twiddly bits in the original routine (simpler). Doing this required relatively little investment of my time and effort (easy) beyond reverse-engineering the routine.
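A reconstruction of that progression, with hypothetical details (the article's actual code differs), might look like this:

```python
# Before: primitive subscripting and a magic number in the caller.
def flagged_records_before(records):
    result = []
    for x in records:
        if x[0] == 4:  # what is x[0]? what is 4?
            result.append(x)
    return result

# After: a small class gives the twiddly bits a home and a name.
FLAGGED = 4

class Record:
    def __init__(self, fields):
        self.fields = fields

    def is_flagged(self):
        # The subscripting and the literal move out of the way.
        return self.fields[0] == FLAGGED

def flagged_records(records):
    return [r for r in records if r.is_flagged()]
```

The second version is more developed (a named type), more clear (intent is in the names), and leaves the calling routine simpler (fewer loose parts).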

In the final example, I make the code considerably more terse. This brevity is enabled by the clarity, which is also enabled by the simplicity. The result is something most python programmers (and most non-python programmers) can apprehend at a glance.


We can choose to understand "simplicity" as a simple count. Each variable, constant, and operator is a part of the routine. Each branch in the routine is a part. When there are a great many parts involved, the routine is 'complex' even if it is easy to understand. If there are a very few parts, it operates simply even if it is terse and cryptic.

Likewise we can count the steps involved in an operation. The more steps, the more complex. A step is a single operation, whether assignment, function call, calculation, or instantiation.
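One could even make such a count mechanically. Here is a rough sketch using Python's standard ast module; which node types count as "parts" and which as "steps" is my own assumption, not a standard metric:

```python
import ast

def count_parts_and_steps(source):
    """Crude simplicity count: parts are variables, constants,
    and operators; steps are assignments, calls, and calculations."""
    tree = ast.parse(source)
    parts = steps = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.Name, ast.Constant, ast.operator, ast.cmpop)):
            parts += 1  # variables, constants, operators
        if isinstance(node, (ast.Assign, ast.Call, ast.BinOp)):
            steps += 1  # assignments, function calls, calculations
    return parts, steps
```

For "x = a + 1" this counts four parts (two names, one constant, one operator) and two steps (one calculation, one assignment).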

It might be interesting to note the similarity to the cyclomatic complexity metric, which is a count of possible paths through the code.


Clarity is more-or-less orthogonal to Simplicity. The first, unworked example in my article was quite unclear. I can hardly call it complex, because "if x[0] == 4" has only one variable, two operators, and two constants. That is a rather simple structure, even if we add in the for loop and the additional list variable to hold the selected elements. Though it is simple and primitive, it would be sinfully perverse to refer to it as "clear."

While code can be simple and yet unclear, it may also be fairly complex yet clear in intent and usage. These two attributes are confused because sufficiently complicated code becomes perplexingly unclear, and sufficiently simplified code usually becomes relatively obvious in intent (or at least becomes easy to clarify with a simple act of naming).

One might add "brevity" as another virtue, with the note that simplicity, clarity, and brevity together combine to form something we refer to as "elegance." Complexity and obfuscation combine to form something one may refer to as "opulence" but which most would recognize as "a mess."


An easy solution is best described, via twitter, by Gary Bernhardt (in an echo of Ward Cunningham) as a short "distance to a solution". It may be easy to drop a nested loop with an "if" statement into the current routine. It might be easy to re-purpose a variable for "just this little block of code". Gary's full quote about code obfuscators is that "they focus on their distance to a solution over the readers' distance to an understanding." A short distance to a solution is "easy", whereas a short "distance to understanding" is "clear."

Note that an easy solution to implement may be neither simple nor clear nor brief nor well-developed. It might be easy to write an if/else, duplicate the entire method in each clause of the if, and then modify one of them to cover a new condition.  It would make the code doubly complex (duplicating the operations and operators, plus one) and much less clear (as you'd have to hunt for the differences).
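A minimal sketch of that move, with hypothetical names:

```python
# Easy: wrap the whole body in if/else and tweak one copy.
def order_total_easy(prices, express):
    if express:
        total = sum(prices)
        total += 25  # express surcharge: the one real change
        return total
    else:
        total = sum(prices)  # everything else is duplicated
        return total

# Simple and clear: one body, with the difference expressed once.
def order_total(prices, express):
    surcharge = 25 if express else 0
    return sum(prices) + surcharge
```

The first version was the shortest distance to a solution; the second is the shortest distance to an understanding.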

That being said, we like things to be easy, provided they don't compromise simplicity or clarity. Our ideal is to maximize simplicity and clarity so that future solutions will be easy to implement well.


We all know of the code smell called "Primitive Obsession" which has some deleterious effects on simplicity and clarity.

If we have primitive obsession, we surround our classes and methods with little clouds of variables. A classic example is passing latitude and longitude packed as integers instead of passing a coordinate object. Each method using the pair takes on more parameters, the parameters have less specific types, the routine must perform more primitive operations, and it passes these values to its subordinate routines which take on a similar burden. Duplication of functionality is virtually guaranteed.
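A sketch of the cure, using hypothetical names (a frozen dataclass standing in for the coordinate object):

```python
from dataclasses import dataclass

# Before: four loose parameters per call, duplicated everywhere.
def northernmost_primitive(lat1, lon1, lat2, lon2):
    return (lat1, lon1) if lat1 > lat2 else (lat2, lon2)

# After: one specific type absorbs the primitives, and coordinate
# behavior has a single home instead of a little cloud of variables.
@dataclass(frozen=True)
class Coordinate:
    latitude: float
    longitude: float

    def is_north_of(self, other):
        return self.latitude > other.latitude

def northernmost(a, b):
    return a if a.is_north_of(b) else b
```

Every subordinate routine now takes one well-typed argument instead of inheriting the primitive burden.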

Complexity cannot vanish from a system, but it can be moved to a place where it can be treated as if it were very simple indeed.

Sometimes a more primitive solution is appropriate. In python, I once replaced a monstrous dictionary with a primitive list of tuples and received a huge performance payback. The more primitive container was right, but it made my code less clear and less simple. The more primitive solution was not easier to write or to read. Mind you, the real problem in my example was that I had chosen the wrong developed abstraction. I replaced it with a purpose-built class that was clear, simple, and performant.

Why Does It Matter?

If we work in agile teams, our goal is always to build a future in which programming is easier, faster, and cleaner. To respond quickly to changes, we must be able to read and understand code quickly. If we allow simplicity and clarity to degrade in the short term, we find ourselves having an increasingly difficult time producing quality software in the near-term future.

Tuesday, February 9, 2010

The Problem Of Copying

Many programmers see their jobs as a process of gluing together code fragments to make things work. Many of them produce results very quickly and are lauded for productivity. TDD, mocking libraries, dependency injection, and refactoring tools seem to them a silly waste of time; they can hack working examples together faster than most "craftsman" programmers.

There is some validity to the approach. Quite often programmers find code samples on the web or in a very fine "python cookbook" or the like, and they copy them into their code base. Likewise, an API comes with example code for a reason. There is nothing wrong with starting from a good external example, as this will often jump-start your application's use of an API or technique. If plagiarism is avoided (i.e., the samples are public domain or appropriately licensed) then this kind of copy-and-paste may be valid and useful.

The problem comes when copying and pasting within an application. In this context, it is a quick-and-dirty approach to programming. A search of the web or twitter for the phrase "quick and dirty" will show primarily examples of the term being used as a virtue or feature. It suggests that the information it references is wholesome, folksy, and free from self-indulgent over-revision. In such cases, I fear that the word 'Quick' has distracted the reader and that the word 'Dirty' has been swept under the rug.

Dirty is not a virtue. While code copying is a known development disabler, it is hard to tell copy-and-paste programmers that the way they work is wrong (let alone to tell their managers!). Quick is clearly a virtue in the marketplace, so how can it be wrong to be done sooner?
"The trouble with quick and dirty is that dirty remains long after quick has been forgotten." ~Steve McConnell
The problem is that they're not done sooner. Copy-n-pasters just stop sooner, and the code goes out in poor condition. Lacking unit tests, we can't know whether it is really done, and code pasted together in long, stringy functions is notably hard to test. To make it testable, it usually has to be refactored into methods, which then are refactored to remove duplication. If we aren't refactoring, it's rather unlikely that we're testing. If we haven't tested the code, then who is to say it's done? If there are no tests around the methods up and down the call stack from our paste-receiving function, who is to say that we've not broken some other code? And what of code not directly in our call stack, but which touches upon the data or structures we've modified?

In what kind of bizarro world is "broken" compatible with "done"?

Would anyone sign up for a program where we would intentionally degrade system development speed by even 5% per quarter for the life of our product? Would we approve a continual increase in both fresh bugs and regressions? It is unlikely, all else being equal.

Would businesses sign up if they could have the next five features implemented before the mud hits the fan? Most of them would, and gladly. The problem is that "quick and dirty" sounds very quick and not so dirty. There is a lot of pressure to quiet the angry customer, fulfill a contract, or close a sale. It is foolish to deny that "sooner" matters. Any programmer worth his pay would love to be able to go home with even 10% more accomplishment. "Faster", sings the siren, "faster still!".

The allure of copying is that it causes problems that accumulate "later" while seeming to give benefits that are paid "now." The ability to defer gratification is a famous indicator of a student's likelihood of success, but businesses exist to gratify customers as quickly as possible.

The promise is false. Code copying does not position us to work faster next week, but slower. Say you have five lines of code in an if/else statement inside a loop in one of your functions: code which converts a Lead to a Customer. I need to do the same, so I copy those lines from your function and paste them into mine. As a result:
  1. The compiler has to compile that code twice.
  2. If there is a defect, it needs to be corrected twice or it will be reported as a regression.
  3. If I realize an improvement in my code, I have to back-feed it into yours or callously leave yours unimproved. Is it likely I will spend my feature development time fixing your code?
  4. There is now even more untested code in the system.
  5. If Lead-to-Customer conversion gathers new requirements, we have to find both examples and enhance them. Heaven help us if we miss the one the customers actually use most!
  6. We are working against our IDE, which would happily have located the conversion method for us, had it not been scribbled into the middle of our individual functions.
  7. If I write unit tests for my copy, yours is still untested.
  8. When QC tests your functionality, they will need to cover the edge cases of customer conversion. Then they will need to cover those cases again in my code.
  9. When our two versions diverge, programmers may use either my improved or your unimproved version. They may reinvent my improvement, wasting time for little gain.
  10. Our business people request that our coworkers build a batch customer conversion feature, but there is no function to convert leads to customers. The coworkers must reinvent the process or find and evaluate our copies. Either way, they have wasted development time.
  11. The code we pasted into now is doing multiple things, making it harder to read and comprehend.
  12. The code we pasted into is likely leaking variables that would have been encapsulated in a single function call.
  13. It is very likely that our merged functions are harder to optimize than they would have been with a shared function call in place.
If the code is written perfectly, never needed anywhere else, and subject to no requirement changes, then copying would be harmless, but in such cases copying is not necessary. To copy code, then, is to ignore the consequences to the team in order to seem to be done sooner. This is an act of selfishness.
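For contrast, here is a sketch of the shared alternative; Lead, Customer, and convert_lead_to_customer are hypothetical stand-ins for the real domain code:

```python
class Lead:
    def __init__(self, name, email):
        self.name = name
        self.email = email

class Customer:
    def __init__(self, name, email):
        self.name = name
        self.email = email

def convert_lead_to_customer(lead):
    # The five lines that were pasted around live here, once.
    # Your feature, my feature, and any future caller all invoke
    # this instead of carrying a private copy.
    return Customer(lead.name, lead.email)

def convert_leads(leads):
    # The batch-conversion feature the business asked for falls
    # out almost for free once the shared function exists.
    return [convert_lead_to_customer(lead) for lead in leads]
```

One named function answers nearly every item on the list above: one place to compile, to fix, to improve, to test, and for the IDE to find.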

The bias against code duplication goes well beyond being a pet peeve of the Agile Otter. It has been well-explained by many programming texts. For a few online references, see what people say about duplicate code at Wikipedia, C2, Ralph Johnson's blog, the "prag progs", or the abstraction principle. Big balls of mud grow from accumulated dirt, and much of that dirt is the residue from "quick and dirty" programming. After all, what is mud but dirt that is not DRY?

Habitually copying and pasting code among function bodies is not an act of heroism, but rather an act of corporate sabotage. It is sad that we do not treat it as such.