Wednesday, August 22, 2012

All stick, no carrot

The Factory School of software testing follows the ideology that the software world can be sliced up into very neat pieces or categories. All traces of the messy effects humanity can have on a project are removed, and you're left with "metrics". The Factory School is an attempt to simplify or platonify (as Taleb would put it) your world for a variety of reasons. It depends on things like test case pass/fail counts, bug count metrics, test estimation, and very reproducible work. Numbers, numbers, numbers. No people.


My conclusion here is that this style of test management leads to an overly heavy-handed approach to management. I'll illustrate this through some experiences I've had while working for other companies, as well as an anecdote from the LinkedIn forums.


At one company I worked at (for a very short stint), whenever a customer found a problem, the tester who worked on that feature was required to sign a sort of incident report which explained the issue and served as an admission of guilt that we had tested the feature and apparently hadn't done a good job. The interesting thing here is that detailed test cases were required, as well as test case reviews. Factory ideology and a culture of blame, together at last!


At another company, testers were made to do upfront estimation along with the programmers. We had to estimate, down to the half hour, the amount of time it would take to write and to execute test cases; these were two separate estimations. The estimations were consistently wrong, of course (except in some cases where fake testing was occurring), and there were consequences ranging from talks about why that particular person can't estimate correctly to disciplinary actions. Note that there was no reward for doing good estimation, even if that were possible.


There was a discussion thread on LinkedIn about how people run detailed test scripts. The question was whether people run them step by step or from memory. A QA manager mentioned that she would create a few "rigged" test cases that were designed to fail. If someone ran a rigged test case and didn't mark it as failed, there would be disciplinary action. If that happened a few times, she would have that person removed from her team somehow.


The common thread I see here is harsh discipline when messy humanity is noticed, but also a distinct lack of reward when the desired procedure is followed. This type of environment restricts the serendipity often needed for really innovative and interesting testing. Things like pass/fail numbers for test cases, bug counts, lines of code, or numbers of tests don't tell a story by themselves. A story needs context to make sense. A story about software needs a human to make sense.


I'd love to hear your thoughts and experiences on this!


Tuesday, July 31, 2012

Safety language overview

The topic of safety language has popped up twice lately: once at the CAST 2012 critical thinking tutorial with Michael Bolton, and once on Twitter in a tweet from Ben Simo. Safety language is also known as epistemic modality. Epistemic modality roughly translates to "modes of knowledge"; the word epistemic comes from epistemology, the philosophy of knowledge.

Anyway, I'm no academic, so here's the practical stuff.

Safety language is a way to vocalize that you are currently thinking critically about something. It is the verbal suspension of certainty. Communicating in this manner shows that you have separated an observation from an inference or judgement about that observation. It is a precise statement that says you do not currently know and need more evidence.

Examples:
could
may
might
should
seems
perhaps
possibly

Think about the difference between these two phrases:
"It is going to rain today."
"It looks like it might rain today."

To me, the different modal verbs show distinctly different stages of critical thinking: namely, certainty versus uncertainty, or the suspension of certainty. These examples are pretty simple, but they can be applied to observations made when testing software. When performing black box testing, you have a fairly limited scope of what you can observe. You have the UI you are interacting with and possibly some log files and a database (though that may no longer be black box testing at that point...?). At any rate, you are not looking at the code, so your understanding of what is actually happening is limited.

Due to that lack of knowledge about what is really happening, it may be wise, when communicating observations, to be precise about what you do not know. I'd love to hear your thoughts and experiences in using safety language.

This is soloist Demondrae Thurman playing Slavish Fantasie.




Tuesday, July 24, 2012

CAST 2012 recap

So, unless you've been testing from under a rock, you know that CAST 2012 happened pretty recently. This was my first software testing conference, and I was super excited to be there. CAST is a pretty small conference (this year had maybe 175 or so participants) that is heavy on the conferring. Anyway, here are some of the things I took home from the experience.

Remembering how much I love testing
Prior to CAST, I had not really spent time around other testers since I moved away from Texas. I mean...I've seen people with the job title, but no one really passionate about the craft. Being in a crowd of people whose natural instinct is to question and learn about everything around them was energizing. It was like returning home. The whole conference appeared to be designed around this ethos: talks had built-in time for questioning the speaker, workshops had built-in time for recaps and discussing what was learned or not learned, and unplanned sessions (emerging topics and lightning talks) were put on when people came up with new ideas they wanted to talk about.

I can contribute (and you can too!)
Monday evening, after the scheduled talks, was a meeting of the AST education special interest group. This group was formed to talk about the SummerQAmp program, which Michael Larsen and others have diligently been working on, as well as the need for BBST instructors. I had taken the BBST Foundations course and completed Bug Advocacy just before CAST. The result of this EdSig talk was me discovering an interest in being an instructor for the BBST classes. I've signed up for what is currently the last of the BBST courses, Test Design, and also registered for the instructors course. I'll post an experience update once the instructors course is complete in late October.

There is loooooots to learn
Critical thinking, systems thinking, social sciences, observational proficiency, heuristics, and biases: these are just a few of the tools a tester uses daily, whether they know it or not. CAST gave me a place to remember them and practice them in a group. The talks gave an academic setting to discuss each topic, and the workshops and tutorials gave us a place to practice and discuss our observations afterwards. I've now expanded my reading list on many of these topics and am eager to start turning pages.




I'll depart with a nice piece of music by Ennio Morricone called Gabriel's Oboe. David Childs is the soloist.