Tuesday, December 8, 2009

Why Blog?

Gus Mueller led me to Dan Wood's article on blogging the other day. I didn't quite get around to reading it until just now, but in the meantime I semi-subconsciously thought about why I think blogging is a good thing to do. Dan's reasons, good as they are, are not mine.

  1. Blogging pushes me to develop my thoughts from unexamined opinions to defensible theses.
  2. Dan hopes future customers will read his blog. I hope other kinds of people find mine: mentors, co-authors of papers, peers, critics, skeptics, co-workers, people I can mentor?
  3. Blogging exercises my communication skills. A big part of testing is detective work. But another big part is story-telling. I want to get better at telling clear, convincing, entertaining stories.



Wednesday, December 2, 2009

Testing for Experts, or When a Spec is Worth the Bother

@jonbell's Ignite Seattle talk collided with a recent testing adventure of mine. I was really frustrated testing a new feature for one of our apps. About the time I read his talk (oops, I forgot to go in person!), I stopped trying to test the feature and started writing up how I thought it was supposed to work. Now I'm not so grumpy anymore.

I used to think there was just testing. Then I broke it out into two types of testing: learning and checking. Now I think there are three.

I still think there's learning. And I still think there's checking. But there's also *learning A to check B*.

For many of the features I'm supposed to test, the only written spec is the bug. Sometimes it's only a sentence or two. And that's enough for the engineer to write the feature (mostly) correctly, and enough for me to (mostly) test it.[0] Writing more detail would just lead to time-wasting, abandoned specs.[1] For example, "Add a 'new folder' toolbar button." The engineers I work with all know how a toolbar button typically works, and so do I. We can make reasonable guesses about what the user expects. I check that it feels predictable[2], shake out a few corner cases, and off we go.

But sometimes we're building expert features. I am not an expert in our users' field. I can't trust my gut to know which operation ought to take precedence when two of them interact. When I realize I'm in this situation, I need to learn first and then test. This is when I really need a spec. Maybe the engineer needed a spec too, and the bug is already full of notes from him and our PM clarifying how the feature needs to work. More often, though, I need to write my own spec.

"But you just said you don't know this stuff. How can you write the spec?"

I'd say I'm *exactly* the person to write the spec, because I'm the one who knows what questions it needs to answer. If I ask the engineer "How is this supposed to work?" I might get part of what I need to know, but I tend to feel like I'm wasting their time if I camp out in their office too long. On the other hand, if I write up a short wiki page on how I *think* it's supposed to work, they're often quick to jump in with corrections and clarifying comments.

Developing the spec is the learning step. Now I can pretend to be an expert in the field, and properly check the behavior of the new expert feature.

[0] Honestly, now, when was the last time you *fully* tested something?
[1] Thinking of http://www.joelonsoftware.com/articles/fog0000000033.html
[2] Borrowing from Jon