Friday, August 14, 2009

The Boutique Tester - ReDux

A few weeks ago I put out a post on my other blog called "The Boutique Tester"; it has received a decent amount of commentary and feedback, including a mention on the Watir Podcast.

I've got a few ideas to flesh out the details and the business model, and to discuss people who come close to embodying the ideals of the Boutique Tester. The question is: Where do I put them? This blog? My other blog, Creative Chaos, which focuses on testing? On one, with links back and forth?

I'm interested in your thoughts.

Thursday, April 23, 2009

"Worse is Better"

When I was new in my career, I read a lot of books about programming craft. One common theme of those books was perfectionism - and analysis paralysis: a team would want to get the requirements (or design) "right", and so would never actually produce anything. I never saw it; I thought it was a myth.

Then I went to work for a big company. Oh. My. Goodness.

I met people who talked a great game yet, in a period of years, had never actually shipped any software. I'm talking about /nothing/. Years later, some of them are senior managers and executives. I'm still not quite sure I understand how that works. I have a clue, but that's not what this post is about.

I realized that -- somehow -- my value system was different. I wanted to get something out there, learn from its failings, adjust it, and make it better - instead of trying to get everything right "up front." Reading things like Peopleware or the Agile Manifesto convinced me that I was not the only person who felt this way.

The best (short) explanation of this I have ever read is "Worse Is Better" by Richard Gabriel; here is an excerpt:

I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase "the right thing." To such a designer it is important to get all of the following characteristics right:

* Simplicity-the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.

* Correctness-the design must be correct in all observable aspects. Incorrectness is simply not allowed.

* Consistency-the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.

* Completeness-the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the "MIT approach." Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

The worse-is-better philosophy is only slightly different:

* Simplicity-the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.

* Correctness-the design must be correct in all observable aspects. It is slightly better to be simple than correct.

* Consistency-the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.

* Completeness-the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the "New Jersey approach." I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

Of course, you can read the entire section yourself - it's taken from a much longer paper titled Lisp: Good News, Bad News, How to Win Big.

Now, the mental attitude of perfectionism is entrenched; you can't just forward someone a link to a Gabriel paper and expect them to change their attitude or behavior. Asking them to change behavior would be, in effect, asking them to change their value system.

The one approach I have had some success with is waiting six months and asking "So, how's that working for you?" then asking how long it would take if the team just ... did it and adjusted?

It's hard to argue for six more months of designing the process for a reporting and analysis package when you could create the reports and put them in the hands of the customer in less than a month.

Failing that, I talk about failing fast, lean thinking, and waste.

UPDATE: I've found the section I quoted, treated historically, to be some of the greatest prose ever written on software development dynamics. Jamie Zawinski, one of the original Netscape Navigator developers, quotes just that section on his website. Certainly, the LISP folks actually shipped things. I did not mean to compare them to other people of a different character who were /totally/ stuck in analysis paralysis.

Monday, March 30, 2009

Low Quality Software

I just made the following post to the Software Craftsmanship Google Group:

On Mar 29, 11:11 pm, Klaus Hebsgaard wrote:
> I totally agree, but for us in this mailinglist it is very clear what
> craftmanship is, and why it is good.
> But most people don't understand that there is high quality and low quality
> software.


If I go to Wal-Mart and buy a low-quality shirt - sure, it may look a little odd and a button may fall off, but, for the most part, I can wear it, and it is suitable as clothing.

If I buy Wal-Mart-quality /software/, on the other hand, the risk is that it simply does not work.

I'm fond of this quote, by John Ruskin:

"It's unwise to pay too much - but it's worse to pay too little. When you pay too much you lose a little money, that is all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing you bought it to do. The common law of business balance prohibits paying a little and getting a lot. It can't be done. If you deal with the lowest bidder, it's well to add something for the risk you run. And if you do that, you will have enough to pay for something better!"


Friday, March 6, 2009

Development Katas

As many people in the software craft movement have pointed out, it's very hard to focus on reflective improvement while you are coding. In fact, the nature of your work (maintenance, time pressure) may make it hard to develop specific skills like design or clarity of code.

So why not do specific exercises outside of work, designed with the intent to sharpen our thinking - perhaps in public under scrutiny?

Micah Martin has proposed such "katas", based on martial arts. It's interesting stuff and there's video.

What are your favorite developer (or maybe test) katas? I intend to make a list.

Friday, February 13, 2009

Robert C. Martin On Craft

We invited "UncleBob" Martin to GLSEC last year, to give his talk on Ethics and Craftsmanship as the opening keynote.

If you missed it, well, you missed out. However, Bob recently gave the talk at JAOO and it was recorded; you can watch the video online right now for free. Enjoy.

Wednesday, February 11, 2009

Outliers - III

Last month I mentioned the great body of literature on airline crashes and how they happen. The problems are similar to the ones we experience in business: the project is late, the staff is working too hard and taking shortcuts, management applies pressure, and a series of not-that-bad-by-themselves mistakes builds up until, eventually, you get a crash.

What's interesting is that the pilot is not working in isolation. On any commercial flight, the pilot will have someone qualified, albeit junior, sitting next to him - the co-pilot. What the co-pilot is responsible for, exactly, is a matter of philosophy. Some people view the co-pilot as sort of a double-checker; others view him as merely a body-double to fly the plane in case the pilot is somehow incapacitated. In the US, we typically go for the former, and it turns out this distinction is extremely important.

You see (obviously), when you have someone actively engaged in checking your work, few errors get through. We see this in software development, too; we call it "pair programming."

Notice I said that the co-pilot is junior. Now the issue of culture really begins to set in, because some cultures insist on deference from juniors.

So how can you, as a junior, express deference, but also point out that the pilot is making a mistake, that we are about to fly into a rainstorm and that our radar is on the fritz?

It turns out you use something called mitigating speech. In Outliers, Gladwell uses the example of your boss asking you to do something over the weekend, versus your asking him.

In some places in North America, your boss might just swing by your cubicle at 4:30PM on Friday and say "I need this on my desk Monday morning." But if the tables were turned -- if you needed him to approve some document and send an email, you'd say "hey, boss, if it isn't too much trouble, could you take a look at this over the weekend and email your approval? That'd be great."

You're using mitigated speech. Gladwell lists six different levels of speech, from Command ("Deviate thirty degrees right") to Obligation ("I think we need to turn right") to suggestion ("maybe we should turn right") to query ("which direction would you like to deviate?") to preference and finally, to hint: "That return at twenty-five miles looks mean."

Yet, to the casual listener, mitigated and "no big deal" sound exactly the same.

So you try really really really hard to express deference "hey boss, could you please maybe think about" - in the hope that you'll be a team player - and nothing happens. Nothing changes. The boss thought it was no big deal.

This is especially problematic in high-ceremony cultures, where the decision has already been made behind closed doors. The boss would lose face to change the policy, and, besides, Joe's a team player - it's no big deal.

Until you run into the broad side of the mountain.

This isn't just speculation; countries that have a strong culture of deference have, historically, had a great many more aircraft accidents. On the order of seven times as many.

In aviation circles, the solution is something called "Crew Resource Management", which Gladwell explains is designed to give the crew the skills to communicate assertively, including specific ways to escalate if the boss isn't getting the hint. The fundamental assumption in Crew Resource Management is that we are going to make mistakes, and need to be vigilant to check and correct each other.

The basic escalation pattern is:
"Captain, I'm concerned about ..."
"Captain, I am uncomfortable with ..."
"Captain, I believe the situation is unsafe."

Before you weigh in on a subject, ask yourself how your mitigating language could be misunderstood as "no big deal." And, if it's an ethical or moral issue and the other person isn't getting it, consider escalation language.

It might not make you friends, but you might just avoid the cliff.

Monday, February 2, 2009

What is elegant code?

David Starr recently interviewed me for his podcast, "Elegant Code", and just put the interview up on the web today. The audio quality is a little choppy in parts, but I'm more interested in your feedback on the quality of the ideas.

You can download the MP3 directly here.

Tuesday, January 27, 2009

The Agile Aptitude Test

About two weeks ago, I was slated to sit in on a panel on Agile Development with Menlo Innovations co-founder Richard Sheridan; today Lisamarie Babik, Menlo's Chief Evangelist, gave a talk on extreme interviewing at XPWestMichigan.

I couldn't make XPWM either, so I did the next best thing - I asked an editor for permission to interview Lisamarie Babik and Richard Sheridan on the subject of the "Extreme Interview." The result is The Agile Aptitude Test.

It's not quite as good as being there, but it's close, and, I have to say, a lot cheaper than a plane ticket to Grand Rapids.

Wednesday, January 21, 2009

New Interview up

I just completed an interview on my career and its intersection with writing and public speaking. The interview is up here, and, as a Frequently Asked Questions list on guerrilla marketing for the aspiring craftsperson, it ain't half bad.

I should say that I'm very happy with the interview as produced. At the same time, our discussion and the draft version had a little more ... humility in it. Yes, I stand in a pretty good position today, and yes, I could lose all of it tomorrow, and yes, when I speak, I try to keep that in mind.

Monday, January 12, 2009

The flip side of expertise

(I've got Outliers bookmarked. I will come back to it, really. Now, on to my post - I'm not sure where I'm going with this, and I've found that that type of expedition is often the most insightful. Please take this with a grain of salt ...)

Over the past few years, I have seen this pattern:

1) A developer writes a moderately complex system.
2) Over time, the developer moves on,
3) and a series of maintenance programmers touch the code.
4) The code changes - sometimes hacks, sometimes decent. These changes tend to each do one specific thing - and sometimes introduce unintended consequences and side effects.
5) Time passes. Steps 3 and 4 repeat. Eventually ...
6) The cumulative effect of these changes is technical debt; the code is brittle and expensive to change. Eventually, every change seems to introduce a problem at least as big as the change.

"How do you get out of the mess" is what a great deal of the tech debt literature suggests. Besides prevention, here are some of the options which haven't been explored very much:

1) Get acquired by another company that has the same type of system. Migrate to that system.

2) Declare bankruptcy. Stop supporting that business process, do things manually, etc. Sometimes, if this is the main line of the business, it involves actual bankruptcy.

3) Hire a genius. By this I actually mean someone with an IQ in the 140+ range. Sometimes, a "steward" who has stayed with the company for 5+ years and watched the system evolve can do this with an IQ in the 120-130 range.

I'm not sure how I feel about option three. The problem with the genius programmer is that they really can track far more variables in their head at one time than the typical person - so they can continue to hack, stab, tear apart, and revise the brittle software and succeed at it.

A genius programmer can take a piece of software on its last legs and keep it going for a few more years.

I suppose, if you are able to force the programmer to follow the Boy Scout Rule - to make each change over time make the code better - that could be a very good thing. Potentially, that programmer could pull the code out of the rabbit hole.
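The Boy Scout Rule is easier to see with a small sketch than to describe. Everything below is hypothetical and invented for illustration - the point is only the shape of the change: fix the one bug you came for, and spend a few extra minutes leaving the code cleaner than you found it.

```python
# Hypothetical "before": brittle legacy code a maintenance programmer
# must touch to fix a ZeroDivisionError on empty input.
def avg(l):
    return sum(l) / len(l)


# "After", following the Boy Scout Rule: the bug is fixed, and the
# drive-by cleanup adds a clearer name, a docstring, and an explicit
# edge case - a small improvement each visit, not a rewrite.
def average(values):
    """Return the arithmetic mean of values, or 0.0 for an empty sequence."""
    if not values:
        return 0.0
    return sum(values) / len(values)


print(average([1, 2, 3]))  # 2.0
print(average([]))         # 0.0
```

Multiply that by every change over five years, and the code trends better instead of worse.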

Next Question: If you are a programmer, should you aspire to be the genius?

Friday, January 9, 2009

This is how we do it ...

Our VP of Product, Adina Levin, presented how we develop software at Socialtext to the Silicon Valley Product Management Association yesterday. Details on my other blog, Creative Chaos, here.

Thursday, January 8, 2009

What the Bleep Do We Know?

My wife and I watched "What the Bleep Do We Know" Tuesday night. It was really interesting - something of a documentary, with a story wrapped around it and a huge amount of special effects. Some of the conclusions were a little new-agey, but if you are a mature grown-up, I'm sure you can handle it.

Last night I watched the special features. Usually, I avoid them (the scenes that were cut were, it turns out, generally cut for a reason) - but I just wanted to hear more about the concepts.

One of the things that hit me was the budget. This was a movie originally budgeted at $250,000 as a documentary that ran to a total cost of $5 million.

That's nineteen hundred percent over budget - call it two thousand, in round numbers.

If it were a piece of software, according to the Chaos Report, it would be a failed project.
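The overrun arithmetic, for the curious:

```python
budget = 250_000     # original documentary budget
actual = 5_000_000   # final cost

ratio = actual / budget                         # the final cost is 20x the budget
overrun_pct = (actual - budget) / budget * 100  # 1900% over the original budget

print(ratio, overrun_pct)  # 20.0 1900.0 - roughly "two thousand percent over"
```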

Yet Wikipedia reads:

According to Publishers Weekly, the movie was one of the sleeper hits of 2004, as "word-of-mouth and strategic marketing kept it in theaters for an entire year." The article states that the gross exceeded $10 million, which is referred to as not bad for a low-budget documentary, and that the DVD release attained even more significant success with over a million units shipped in the first six months following its release in March 2005.

Keep in mind, 2005 was four years ago and only one year after the theatrical release. Certainly, by now, the original investor has more than made his money back - most likely several multiples.

It's pretty hard to call that a "failure."

When we assess our failures in software development, it might be better if we looked at the overall outcome, instead of the initial budget.

Late projects don't mean the project failed - they mean that whoever did the up-front estimation probably failed at that singular task. Now, that is a bad thing, because the business made decisions based on unreasonable hope.

But it's not the only thing.

More Outliers to come.

(Hey, check it out, What the bleep is available for free on Google Video. I'd forgotten how cheesy the intro is. Man, you'll have to wade to at least the 5 minute point to get to the good stuff.)

Tuesday, January 6, 2009

How we gain expertise

Kathy Sierra did a 30-minute presentation at The O'Reilly Emerging Technology Conference on how we gain expertise - and the audio is available from ITConversations for free.

You can listen to it for free here.

Outliers - II

Later on in the book, Gladwell examines airplane crashes - and how they happen. Listen to this:

In a typical crash, for example, the weather is poor - not terrible, necessarily, but bad enough that the pilot feels a little more stressed than usual. In an overwhelming number of crashes, the plane is behind schedule, so the pilots are hurrying. In 52 percent of crashes, the pilot at the time of the accident has been awake for twelve hours or more, meaning that he is tired and not thinking sharply. And 44 percent of the time, the two pilots have never flown together before, so they're not comfortable with each other. Then the errors start - and it's not just one error. The typical accident involves seven consecutive human errors. One of the pilots does something wrong that is not by itself a problem. Then one of them makes another error on top of that, which combined with the first error still does not amount to a catastrophe. But then we have a third error on top of that, and then another and another and another and another, and it is the combination of all these errors that leads to disaster.

Does any of that sound familiar? The project starts out in a negative climate - perhaps the stakeholders each have a different agenda. The project is late, so the technical team is hurrying. They aren't checking each other. They are working overtime, so they are tired. They begin to make mistakes - each, individually, won't kill the project, but when you add them up, they mean that when the code is delivered to test (or worse, to the customer), nothing works.

Now, in the past decade or so, a lot of shops have improved quality to the point that the story above is the exception - or even the stuff of legends. Hope springs eternal. Still, it gives a very strong argument for pair programming: after all, you wouldn't fly in a jet airplane without a co-pilot in the cockpit - why would you allow your business-critical applications to be developed without one?

More Gladwell:

These seven errors, furthermore, are rarely problems of knowledge or flying skill. It's not that the pilot has to negotiate some critical technical maneuver and fails. The kinds of errors that cause plane crashes are invariably errors of teamwork and communication. One pilot knows something important and somehow doesn't tell the other pilot. One pilot does something wrong, and the other pilot doesn't catch the error. A tricky situation needs to be resolved through a complex series of steps - and somehow the pilots fail to coordinate or miss one of them.

We have the exact same problems in software development.

In my experience, on software projects, the problems are rarely technical. Instead, they are communication problems. The right people might know the right things, but fail to communicate them to the implementors, the architects, the testers, or the deployers. Somewhere in the mix, key elements get lost or forgotten, and we end up delivering software that doesn't meet customer needs, is buggy, is late ... or possibly, all three.

Gladwell comes up with a few reasons that this happens; we'll talk about that tomorrow.

Monday, January 5, 2009

Outliers - I

When I think of Software Craftsmanship, I initially think of works like Teach Yourself Programming in Ten Years, Richard Gabriel's Master of Fine Arts in Software, or perhaps Worse Is Better.

But craft applies to a lot more than software, and, every once in a while, something totally outside of that realm will "smack me upside the head."

I'm reading Outliers: The Story of Success by Malcolm Gladwell. I expected it to be great writing combined with interesting stories that didn't really apply to me - thankfully, I was right on the first count and wrong on the second.

About a third of the way through the book, Gladwell begins writing about what makes work meaningful - and how long it takes to develop skill in meaningful work. Here's an example:

"Those three things - autonomy, complexity, and a connection between effort and reward - are, most people agree, the three qualities that work has to do have if it is to be satisfying. It is not how much money we make that ultimately makes us happy between nine and five, it's whether our work fulfills us. It I offered you a choice between being an architect for $75,000 a year and working in a tollboth for $100,000 a year, which would you take? I'm guessing the former, because there is complexity, autonomy, and a relationship between effort and reward in doing creative work, and that's worth more to most of us than money."

Autonomy, Complexity, and a Connection Between Effort and Reward.

Autonomy, Complexity, and a Connection Between Effort and Reward.

That may not define software craftsmanship, but it's a good start.


Hello, folks. I've been developing, managing, or testing software products since 1997. In that time, I've seen a number of principles, patterns, practices, fads, good ideas, bad ideas, and ideas that solved problem X (which you may or may not have) come and go.

Around 2001, in graduate school, I started a hand-rolled HTML website that became Excelon Development. WAIT - Don't Follow that link! It's yucky, hand-rolled HTML that I was doing part-time in Graduate School while developing full-time during the day. I didn't exactly put a ton of time into it!

After starting the XNDEV site, I did a blog that I hand-edited the HTML of - in VIM - on the UNIX command line. Yeah. You can still read it here. And, for a while, I maintained a blog on another site.

Then, finally, in November of 2006, I started Creative Chaos - a blog that explores issues in software development with a systems thinking and testing slant - but my interests are not limited to strictly testing.

Lately, I've been interested in Technical Debt - this strange addiction cycle by which we feel pressure to hit a deadline, do a bad job, and set ourselves up for more pain the next time we touch the code.

You can break the problem down into two parts: The ignorant, who don't know how to develop software well, and the talented but under pressure, who often fear for their jobs or personal safety - these people know better, but feel they have some excuse not to act. (A third category people generally bring up is "bad management" but I'm not sure I buy it. It is the technical person who makes the hack - not the manager. How can it be the manager's fault when that is not the person writing or testing the code?)

While I think the metaphor is helpful, I believe it expresses the problem - not the solution. One solution I find attractive is the idea of Software Craftsmanship. Craftsmanship provides training for the apprentice, which can solve the problem of ignorance, and a social system for the journeyman or master, which can solve the problem of pressure from the boss by counter-balancing it with pressure from a group of peers.

So I'm starting this blog, which will cover dynamics in software projects, lessons learned, agile development, upcoming projects, and so on. It will be a little more code-focused than Creative Chaos. Hopefully, this blog will allow audiences who are solely interested in one area or the other to read without disappointment.

Ideally, of course, you are interested in both! :-)

Let's learn together.