Sam Clarke from nFocus spoke at the Birmingham Tester Meetup last night. The topic under discussion was Application Lifecycle Management – an opportunity for testers, or a threat?
I made a few notes during his presentation, so I thought I’d share. I can’t go into too much detail, as he wants to use the material elsewhere, but hopefully you’ll get the idea.
Sam sees the ALM paradigm as a means to shift testing focus away from just the development cycle, and towards other areas that can use the creative thinking that testers bring to the table.
<edit> – Phil (below) brought to my attention that the difference between SDLC and ALM wasn’t immediately apparent from my post. Although I suspect Sam did cover this during the workshop, I didn’t make a note and hence it didn’t get included in the blog.
- SDLC = the delivery of a system or application.
- ALM = the process of managing a system or application throughout its entire lifetime.
The lifecycle of a software asset typically stretches over several phases:
- the original business case – planning and requirements gathering;
- the development phase – including testing and deployment;
- operations – often including further development (bugfixes, patches, enhancements);
- governance – underpins the entire lifecycle.
Testers can add value to the initial business planning phase, for example by ensuring that considerations like disaster recovery, failure modes, security and performance are addressed from the outset – “exploratory business case testing.”
Equally, governance is an area in which the management team and business policy makers have many good intentions, but which often falls by the wayside. An opportunity for testers to step up to the plate and bring serious issues to the attention of decision makers?
What about when the product eventually gets launched and starts to fall under the remit of the operations team? Non-functional requirements are a priority here, as mentioned above. But there’s also plenty of scope for testing to add value – for example, by creating an automation pack that can be run against fixes and enhancements, running tests against software in the operational environment, and analysing logs – applying the knowledge gained while the system was under development in the first place.
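As a loose illustration of the kind of operational check that might sit in such an automation pack (the function names and log format here are my own invention, not anything Sam presented), a post-deployment smoke run could include a simple log scan that flags error entries after a fix has gone live:

```python
import re

# Pattern for log lines that indicate a problem (hypothetical log format).
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")

def scan_log(lines):
    """Return the log lines that look like errors, so a post-deployment
    smoke run can flag regressions introduced by a fix or enhancement."""
    return [line for line in lines if ERROR_PATTERN.search(line)]

# Example run against a few made-up log entries:
sample = [
    "2013-07-01 12:00:00 INFO  service started",
    "2013-07-01 12:00:05 ERROR payment gateway timeout",
    "2013-07-01 12:00:09 INFO  request handled",
]
for problem in scan_log(sample):
    print(problem)
```

The point isn’t the code itself – it’s that the tester who watched the system being built knows which log messages matter, and can bake that knowledge into checks the operations team runs routinely.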
Sam then facilitated a taster session from his three-part workshop, designed to equip testers with the kind of critical thinking and personal/communication skills required to leverage some of the opportunities he identified.
The main thing that I took away from this is the sense that as a tester I need a pretty broad set of skills and that specialising only in testing disciplines probably isn’t going to be a precursor to a long and successful career. I need to build up my communication and people skills and find ways of delivering value at every stage of the project.
Some of this means changing the standard “how can I break this” tester mindset. In a meeting to discuss the up-front business case, for example, people aren’t necessarily going to appreciate an in-depth discussion of all the ways in which the product might not work. What might be better received is a set of tests to gauge market receptivity to the product in the first instance. Sam also made the point that testers have a wealth of expertise in measuring quality, and that this insight can be used to help set, for example, project success criteria.
Of course, much of this ground is already covered by the business analysis function. Hence the question, I guess – ALM; threat or opportunity?
If you want to find out more about the workshop, you can contact Sam Clarke/nFocus for details.
Brummie tester meetups take place every couple of months or so. The next one is loosely planned for late September and you can register your interest here. Videos from previous events (including yours truly!) can be found here.
P.S. If you're interested in learning more about performance testing, check out my Performance Testing 101 course here.