Wednesday, August 22, 2012

All stick, no carrot

The Factory School of software testing follows the ideology that the software world can be sliced up into very neat pieces or categories. All traces of the messy effects humanity can have on a project are removed, and you're left with "metrics". The Factory School is an attempt to simplify or platonify (as Taleb would put it) your world for a variety of reasons. It depends on things like test case pass/fail counts, bug count metrics, test estimation, and very reproducible work. Numbers, numbers, numbers. No people.


My conclusion here is that this style of test management leads to an overly heavy-handed approach to management. I'll illustrate this through some experiences I've had while working for other companies, as well as an anecdote from the LinkedIn forums.


At one company I worked at (for a very short stint), whenever a customer found a problem, the tester who worked on that feature was required to sign a sort of incident report which explained the issue and served as an admission of guilt: we tested the feature and apparently didn't do a good job. The interesting thing here is that detailed test cases were required, as well as test case reviews. Factory ideology and a culture of blame, together at last!


At another company, testers were made to do upfront estimation along with the programmers. We had to estimate, down to the half hour, the amount of time it would take to write test cases and the time it would take to execute them. These were two separate estimations. The estimations were consistently wrong, of course (except in some cases where fake testing was occurring), and there were consequences ranging from talks about why a particular person couldn't estimate correctly to disciplinary actions. Note that there was no reward for doing good estimation, even if it were possible.


There was a discussion thread on LinkedIn about how people actually run detailed test scripts. The question was whether people run them step by step or from memory. A QA manager mentioned that she would create a few "rigged" test cases that were designed to fail. If someone ran a rigged test case and didn't mark it as failed, there would be disciplinary action. If that occurred a few times, she would have that person removed from her team somehow.


The common thread I see here is harsh discipline when messy humanity is noticed, but also a distinct lack of reward when the desired procedure is followed. This type of environment restricts the serendipity often needed for truly innovative and interesting testing. Things like pass/fail numbers for test cases, bug counts, lines of code, or the number of tests don't tell a story by themselves. A story needs context to make sense. A story about software needs a human to make sense.


I'd love to hear your thoughts and experiences on this!