Monday, April 27, 2009

Rethinking The C Word (clever)

A friend rightfully takes me to task:
The anti-cleverness you describe is anti-badness. I think we all can agree that bad is bad. But you and most of the other anti-clever crusaders have no definition for it other than it made your life miserable or you at least don't like it.

And then you assume someone you never met thought they were being smart. That's where your ice gets really thin. You weren't there and you don't know what the constraints were at the time.

And what really makes it irksome is that you have a whole library of well defined "bad code smells" which are a lot more specific. Why don't you use those? Those have meaning and in many cases they have remedies.

It comes across as smug blaming of the guy who left, and dirties what I've observed is otherwise a healthy, data-driven approach.

See, I like it when my friends help me keep it honest and real. If one wants to improve, one must remain introspective and consider feedback when it is given. Desiring to be both wise and pleasant some day, I thank my friend for this rebuke.

I should not assume that someone is proud of the way this code turned out, or that they did it to satisfy their own ego. That is at least as likely as not to be unfair and wrong... and smug.

I'd not heard of the Anti-Cleverness Crusade before. I feel less lonely in my smug, opinionated jerkitude. I thought it was just me, Wally, and occasionally UncleBob. I wonder if there are membership dues. Or tee shirts. I could use a tee shirt.

When David Tennant as Dr. Who says "clever", I know he means "ingenious." Ingenuity can be wonderful or it can be awful, or perhaps wonderfully awful. In the series, the enemy is generally "clever" in the sense of being "ingenious" and "diabolical."

A bit of research (*cough*google*cough*) and I find something that reflects the sense I'm hoping to convey. Rube Goldberg's self-operating napkin is certainly clever, and ingenious. From the article:
A Rube Goldberg machine or device is any exceedingly complex apparatus that performs a very simple task in a very indirect and convoluted way.
When I say "clever" I am referring to misspent ingenuity. It is intelligence gone wrong.

I don't know a good word that says "unwanted, wasteful, vulgar ingenuity". My friend Wally simply said "clever" with that clearly pejorative inflection. It stuck with me. I have carried it for some years, and have spread the use of the term. I'm sure Wally picked it up somewhere as well. Having some handle for the idea has been helpful. I can't tell you how many times I've rethought some bit of work because it might be (in this sense) "clever."

It is not merely "bad". If it were bad, the code would not function correctly. The NOT (A XOR B) is a small thing, and I figured it out with a short pause, so it's not opaque or indecipherable. It has only two operators (where one is sufficient, so 200% overdesign). It does the right thing provided A and B are booleans and implemented in the expected fashion. It *works*. That makes "badness" a difficult call.

The term "fragile" doesn't properly apply, though the code invites misunderstanding. It doesn't depend upon conditions not algorithmically guaranteed in the code around it. There is no set of values within the domain of the parameters that will produce undefined behavior. It's not wrong, just silly.

My issue with the !(A^B) code is that it is indirect and convoluted. One has to run the truth table in one's head: T^T is false, and F^F is false, and T^F and F^T are true, and the result of that operation is inverted so that T becomes F and F becomes T. So it is F if A^B is T, which is the set of cases when A and B are non-identical. Therefore, it is the value of not-not-(A==B). Needlessly convoluted, but approachable. A bit high-handed. It's hard not to see it as arrogant, though we will endeavor here not to.

With the rest of the framework, my trouble is indirectness. You have repetition (violating DRY), but it's only near-duplication. Some of the code is for rendering, some for compiling data from the database, and some for both. It takes time to determine which is which. You have to dig into the code, or conduct code experiments, to see what values end up where, and which 'names' are actually keys that must match across multiple dictionary structures for the code to run. You have to know which constructor parameters are ignored (some are) and in which circumstances. It is implicit, indirect, and convoluted. And largely test-free.

It is possible that the original author was so intelligent that he/she needed no cues or clues to know which part did which thing nor why. Perhaps they could hold it in their heads without struggling, even across days or weeks. Perhaps the indirectness seemed to them to be ingenious, or maybe they were ashamed that they couldn't think of something simpler. Maybe they were scrambling to get something out the door (a sin in itself). Maybe they were used to working with code that was far less clear and direct.

My friend points out that the original author(s)' motives are lost to me, and that it is poor form to speculate and/or lay blame in the absence of such information. I accept that criticism with thanks.

Whatever the reason, the code requires that the readers make a noticeable investment for little reward. A better framework (hopefully whatever we're getting next) will be much more direct, obvious, and clear.

I am willing to lay aside "clever" and accept a better term. Perhaps "clever" was a needlessly clever term to use. I wonder what Wally would say.

(Embedded video: Best Rube Goldberg Ever)


  1. Wow, now I feel bad for my tone. :-) Someday I will learn to be all humble like you Agile consultants.

    There is no anti-cleverness crusade. It's a personal gripe of mine.

    My assumption about how that evil report framework grew was that it's an organic thing.

    Let's invent a simple report framework and make a few reports in it. Nothing too fancy. Keep it real and YAGNI. Now we need to add some feature. We add some tests (first mind you) and add some features to the report framework, but kinda wedged in, cuz they don't fit well, and make the new reports. We don't refactor because there's no time and we are alone and don't have any good ideas. How do you refactor a template mechanism? Mostly, you don't. Maybe four cycles like that and you are already out of control. Ten or more? Ow. Template mechanisms don't refactor well, they get replaced.

    I think this is how Linux /etc got to be such a mess. This cycle has been happening continuously and independently on tens, maybe hundreds, of discrete projects.

    Anyway, I said "Rarrr" at the end of my comment to at least acknowledge that it was a self-indulgent rant. Now I feel so bad... :-)

  2. Wally answered me in email, having some issues with the comment system here.

    Unfortunately, there are limitations on the html allowable in comments v. actual blog posts. I hope the formatting here isn't screwed up beyond readability, but here is WWWS (What Would Wally Say):

    ----- begin -----

    The basic attack seems to be:

    "... But you and most of the other anti-clever crusaders have no definition for it other than it made your life miserable or you at least don't like it... "

    Actually there is a simple and concise definition of clever, in the pejorative sense, as applied to software design. In order for a programming method to be clever it must have two distinct characteristics:

    1. It must be non-obvious in regard to its effects. That is to say it must require "special knowledge" to which the average programmer would not have immediate access.

    2. It must at least appear to have been chosen for the explicit purpose of saving work for the ORIGINAL programmer.

    That is to say, non-obvious code that clearly required way more effort to develop and implement than was required is not clever, it is just ugly.

    As an example let's assume that you are working on a program, as I often do, where you have Boolean flags that are read into the program from some external physical device. (Like an I/O card for example.) These signals give your program information about the external environment in which it operates.

    Now a reasonable person would create a class, perhaps a singleton but not necessarily, that would access this external I/O and provide obviously named member functions to allow the programmer to test the status of inputs and set or clear outputs.


    class HardIO {
    public:
        bool powerOn();  // true when the external power-on input is set
        // ... similarly named accessors for the other inputs and outputs ...
    };

    Then everywhere you needed to know if the power was on you would instance the class and test the power on member function.


    void someFunct() {
        HardIO theIO;
        if (theIO.powerOn()) {
            // do something...
        }
    }

    But all of that work to create a class, control access, and all of that is well, work. So a clever programmer, knowing that the power on bit is the 6th bit (zero referenced) in the first 8 bit input port would just write a routine somewhere to occasionally read the input port into a GLOBAL variable named "portA" and then when he wanted to check the status of the power on signal:

    if (portA & 0x40) {  // bit 6 is power-on: knowledge only the schematic holds
        // do something...
    }

    He would then proceed to copy that "if" statement wherever he needed to check the status of that signal. (Like say 50 places in 12 files...)

    Either method will compile and run just fine, but pity the fool who is trying to make a minor change to the second program and doesn't have immediate access to the electrical schematic. Pity him even more if the electrical schematic is WRONG!!!

    In my book, the second method is "Clever"!!!! It was simpler to write the first time, and a living hell for others to decode later.

    I have been living with source code like this for seven years. I know clever when I see it. (I even get C# programs written this way.)

    So, "wonder what Wally would say" no more.

    ------ end ------

  3. That's a great definition! I think good people (I'll be bold and lump myself in there...) can disagree about what we should call the intersection of obfuscated and selfish, but we can all feel his pain.

    I think I still have something to say but it's probably best saved for a blog post of my own.

  4. @darrint: if you do, please give us a link here. I'd love to see it.

  5. Here's what I really thought in the end.