A few weeks back I posed the question below on Twitter:
It resulted in what might charitably be termed a storm of controversy, with noted testing thought leader James Bach (among others) campaigning against the use of Quality Center (QC hereafter) on test projects more or less as a matter of principle. The discussion (or at least some of it) that took place on the 22nd of June can be viewed here.
Although there is undoubtedly some merit in James’ assertion that this problem could be solved by way of increased credibility and my simply refusing to use the tool, I must confess to preferring a slightly more diplomatic approach, particularly where my main source of current income is concerned. Perhaps when I’m an internationally renowned testing consultant I’ll have a bit more freedom to pick and choose who and what I work with, but until then… (Though this view does obviously raise the question: which comes first, reputation or client? I’ll save that for another time.)
So as things stand, I’m working with an agile team on what is essentially a waterfall project. We deliver our product by way of Jenkins continuous integration and BDD/ATDD style testing with a layer of rigorous exploratory testing on top. A large technology outsourcing company then integrates it with their [government] data processing system. I want to manage my exploratory testing in a Session Based manner. They want me to report on my testing via Quality Center.
What’s a tester to do?
First of all I should probably define what Session Based testing means to me, since the inevitable response to this post will be – “but that’s not Session Based Testing.” A Session Based Test (SBT hereafter) is a test idea/case that has been conceptualised as a mission or charter (or tour if you’re a Whittaker fan) and that will be executed as a discrete test session, or block of test execution time. Test ideas/cases therefore map to charters, charters are executed as sessions, and the session notes are de facto test results.
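For the programmatically minded, the mapping above can be sketched as a tiny data model. This is purely illustrative – the class and field names are my own invention, not anything QC or an SBTM tool actually exposes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Charter:
    """A test idea framed as a mission, e.g. a tour of one feature area."""
    mission: str

@dataclass
class Session:
    """A discrete, time-boxed block of test execution against one charter."""
    charter: Charter
    duration_minutes: int
    notes: List[str] = field(default_factory=list)  # session notes are the de facto results

# one charter, executed as one session, yielding notes-as-results
charter = Charter(mission="Explore login error handling with invalid credentials")
session = Session(charter=charter, duration_minutes=90)
session.notes.append("Lockout message hints at whether the username exists")
```

The point of the model is simply that the result of a session is the notes themselves, not a binary verdict per scripted step.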
My solution to the problem I posed above is relatively straightforward and most likely not new, but I thought I’d document it for posterity in any event:
There are clearly some problems to be addressed still. Not only am I combining test tools and approaches here, but the project is mixing it up with software engineering methodologies (agile and waterfall), resulting in a weird blend of both. Hence our agile team is delivering a continuously integrated product against a set of change-controlled requirements, not in sprints but in a big-bang towards the latter end of the development cycle. But I digress.
In step 4 we have the notion of finalised charters. This is where our ALM test management tool starts to rear its ugly, constraining head. Clearly, in true Session Based Test Management the concept of a finished set of test charters is unlikely to exist. You would instead have something much more like my original mind map, wherein a charter leads to a branch which may lead to additional branches or recursions ad infinitum. This is the true nature of exploratory testing, and it simply cannot be managed effectively in Quality Center. Certainly from a tracking perspective, the project test manager had a somewhat horrified expression when I informed him that I would likely add further tests at a later date [step 6] based on the results of initial test charters. This approach to testing simply doesn’t stack up against QC’s inflexible reporting constraints.
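To make the contrast concrete, here’s a minimal sketch of that mind-map structure: charters recursively spawning child charters as sessions reveal new areas to explore. The names are hypothetical; the point is that the set of charters is a growing tree, not the fixed, flat test set QC expects:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CharterNode:
    """A charter in the mind map; children are follow-up charters
    spawned by what a session uncovered."""
    mission: str
    children: List["CharterNode"] = field(default_factory=list)

    def spawn(self, mission: str) -> "CharterNode":
        child = CharterNode(mission)
        self.children.append(child)
        return child

def count_charters(node: CharterNode) -> int:
    """Total charters in the tree - a number that keeps changing,
    which is exactly what a finalised QC test set cannot accommodate."""
    return 1 + sum(count_charters(c) for c in node.children)

root = CharterNode("Explore the data import feature")
branch = root.spawn("Explore import with oversized files")       # found during session 1
branch.spawn("Explore resume after an interrupted oversized import")  # found during session 2
```

A flat list of finalised test cases is a snapshot; the tree above is never finished by design.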
Additional test statuses are also likely to be required for the individual charters, rather than the rudimentary Pass/Fail/Not Complete/Blocked variety that QC provides out of the box. Sami Söderblom provides similar insight here.
Interestingly, Michael Bolton chimed in at the end of my Twitter discussion (see earlier) with his view that [QC style] bureaucracy is unlikely to add value to the project (particularly in relation to the associated [QC] cost), but that it is our client’s right to decide on the form of reporting. He neatly summarised what I believe to be the spirit of the dialogue, being that “we’re obliged to offer (at least) more efficient service.”
I’d like to think that I’ve gone some way towards achieving this, given the QC dogma our technology outsourcing partner brings with them. Maybe in the next phase of the project they’ll see the value of SBT and eliminate QC as a reporting tool entirely… But I doubt it.

- Simon
P.S. I'll be hosting the next #BrummieTesterMeetup on the 20th November. See you there?