Category: Software Testing

  • To Automate or not to Automate your Tests?

    To Automate or not to Automate your Tests?

    The degree to which your tests are automated or not, can be a huge factor in how successful your continuous testing strategy is.

    Determining the correct split between automated and manual testing will, to some extent, make or break your releases.

    If you have too many automated tests, or your tests are too complicated, then they may take too long to run. When things take too long to run, people pay less attention to them. Fast feedback is a critical success factor for teams wishing to deploy high quality software on a frequent basis.

    But, if you don’t have enough automated tests, or if the tests you do have result in intermittent (flaky) outcomes, your team will lack confidence in them; resulting in a higher manual testing burden and extended time to release.

    If your stakeholders are to get the feedback they need, when they need it, so they can make appropriate release decisions, how should you deploy your limited testing resources across manual and automated testing to gain greatest benefit?

    As with anything, there’s no silver bullet. There are some heuristics or general guiding principles you can follow though, which will point you in the right direction. I’ve attempted to compile at least some of them, below.

    While you’re coding

    The most obvious place to start developing your automated testing capability is while the product code is being written in the first place.

    Retrofitting unit tests to legacy code is problematic. If you’re in a situation where code already exists, you may wish to skip this step in favour of tests at some other layer of your technology stack. But where possible to do so, adding unit tests to your testing strategy is going to pay dividends in the long term – providing your team with increased confidence any time they need to add to or change their code.

    If the development team is agreeable, following a TDD (Test Driven Development) approach is pretty much the holy grail of this level of testing. With a TDD approach, your developers will write tests BEFORE the code is produced, helping to guide both the design and the development of your solution, and providing you with the best possible degree of unit test coverage in the process. This is the ultimate win, for all concerned!
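
    As a minimal sketch of that rhythm (assuming Python and pytest, with a hypothetical apply_discount function used purely for illustration), the test is written first and fails, and then just enough code is written to make it pass:

        # Hypothetical illustration of the TDD cycle, using pytest.
        # Step 1 (red): write the test first - it fails because apply_discount doesn't exist yet.
        def test_ten_percent_discount():
            assert apply_discount(price=100.0, percent=10) == 90.0

        # Step 2 (green): write just enough production code to make the test pass.
        def apply_discount(price, percent):
            return price * (1 - percent / 100.0)

        # Step 3 (refactor): tidy the implementation, re-running the test to stay green.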

    Integration testing

    Sometimes, unit testing won’t be feasible or appropriate though, depending on the technology and architectural choices that have been selected for your project or product. The next best thing is to follow more of an integration testing based approach, focusing on the areas of your project with the highest levels of integration risk.

    Precisely how you approach integration testing will depend on your product architecture. In a classic three-tier (database, business logic, user interface) arrangement, you need to focus on integration between the business logic and database layer, to ensure that correct data is stored or retrieved from your database when specific transactions are performed. For bonus points, start thinking about performance requirements here also, since the transaction times will have some bearing on the success of your project.

    A more popular architectural style these days is the microservices model, wherein the various components of the system are implemented as individual services. Developing a product this way makes it highly scalable, since it means those services can be instantiated and deployed as needed across e.g. a cloud-based infrastructure. From an integration perspective, it means you need to test the integrations between all those services as part of your automation strategy.
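
    As a rough illustration of what one of those checks might look like (a sketch only – the service URLs and payload below are hypothetical, and I’m assuming Python with the requests library), an integration test simply exercises one service and verifies the effect on another:

        # Hypothetical integration check between two microservices (illustrative only).
        import requests

        def test_order_service_reserves_stock_in_inventory_service():
            # Create an order via the (hypothetical) order service...
            order = requests.post("http://orders.example.local/api/orders",
                                  json={"sku": "ABC-123", "quantity": 2}, timeout=5)
            assert order.status_code == 201

            # ...then confirm the (hypothetical) inventory service reflects the reserved stock.
            stock = requests.get("http://inventory.example.local/api/stock/ABC-123", timeout=5)
            assert stock.status_code == 200
            assert stock.json()["reserved"] >= 2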

    Focus on high priority areas

    When you’re starting out with your automation strategy, you should work from a set of prioritised tests. The rationale for this is simple: if you’re crunched for time at some point in the future (which of course, is always) – if your tests are prioritised, you can start with the highest priority tests and work down the list of priorities as you have time to do so.

    It’s the same thing for test automation. If you don’t have any automated tests yet, start by automating your highest priority manual tests. Don’t do anything else until you have implemented all your highest priority tests. This ensures you have followed Pareto’s 80/20 rule, which — when applied to automated software testing — means that 20% of your tests will provide you with 80% of the needed confidence.

    Look for product risk

    You might well ask, “how should I prioritise my tests?” And the answer is simple: look for the areas of greatest risk. There are two main kinds of risk we want to consider when implementing an automated testing strategy: product risk and regression risk.

    Product risk is likely to be somewhat contentious in terms of test automation, since not all product risks are easy to test for, which means they also won’t be easy to automate. In my experience, product risk relates to one of four key areas:

    • Usability – the solution needs to be a pleasurable experience such that the customer or potential customer wants to keep using the product
    • Fit for purpose – the solution needs to solve the problem for which it was designed.
    • Reliability – the product needs to have the customer’s confidence, particularly as it relates to accessibility [security] and availability [performance].
    • Business – the product needs to meet the requirements of the organisation that created it; often this means it should generate revenue or turn a profit

    As you might imagine from the list above, it would be very difficult to write an automated test that answered even one of these questions. A more likely outcome is that you have a number of tests, which may be designed to address some facets of those various areas, such that a stakeholder (e.g. a product manager) is able to look at the results of those tests and get a feel for whether or not the product will be usable or reliable.

    Acceptance test automation derived from BDD (Behaviour Driven Development) scenarios is often used for this purpose, in my experience.

    Look for regression risk

    The other area of risk which you should consider is regression risk; i.e. the risk that something which was working before has now been broken, or does not work in quite the same way, as a result of some change to an area of the product code. Regression testing is a classic win for an automated approach, since by the time you come to automate a regression test, the functionality should be well understood and ripe for automation.

    Catching regression failures by way of test automation is an easy win.

    Take performance into account

    I mentioned performance above, while discussing integration testing, and also as a product risk. Once you have some automated tests in place, it’s actually pretty easy to monitor the performance of your product, simply by capturing and measuring how long it’s taking to run your automated tests over time.

    This does require a baseline of some sort (a set of tests which remains static in terms of scope, the number and execution approach of the tests) – but once you have this, you should be able to take the individual and aggregated time for executing the tests and determine whether the performance of your application has improved, stayed the same, or gotten worse — and take the necessary actions as a result.
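
    As a minimal sketch of the idea (assuming you can export each run’s per-test timings as a simple name-to-seconds mapping, e.g. parsed from your runner’s JUnit XML report – the file name below is hypothetical), the comparison itself is straightforward:

        # Illustrative only: compare this run's test durations against a stored baseline.
        import json

        def compare_to_baseline(current, baseline_path="baseline_timings.json", tolerance=1.2):
            """current is a dict of {test_name: seconds} for the latest run."""
            with open(baseline_path) as f:
                baseline = json.load(f)

            # Tests that have slowed down beyond the allowed tolerance (20% here).
            slower = {name: secs for name, secs in current.items()
                      if name in baseline and secs > baseline[name] * tolerance}

            print(f"Aggregate: {sum(current.values()):.1f}s now vs {sum(baseline.values()):.1f}s baseline")
            return slower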

    Create levels of automation testing

    Having a baseline set of tests feeds into this next point, which is that it can be helpful to have suites of automated tests which are executed for specific purposes.

    One of the traps I often see teams fall into is that they have a huge number of automated tests, which might take a couple of hours to execute, and they need to execute ALL of those tests in order to determine whether it’s ok to release their product.

    Now sometimes, you’re going to have a huge number of tests, and they’re going to take a long time to run, and there may be no way around that; you HAVE TO run all of those tests for reasons you’ve determined within your team, organisation and context. And that’s fine. But what you don’t want is a situation where it takes, say, two hours to run a batch of tests, only to find that because a single high-priority test has failed, the whole execution needs to be re-run (once whatever the issue was has been fixed).

    This goes back to the prioritisation point above. If you’ve prioritised your test cases, then you should be able to split up your test automation jobs into a series of executions that get progressively more advanced, as you need them to. So you could have something like this, for example:

    • Job 1 = Smoke Test {10x HIGHEST priority tests}. If this passes, move onto Job 2:
    • Job 2 = Acceptance Test {50x tests for new features being deployed in the build}. If this passes, move onto Job 3:
    • Job 3 = Regression Test {400x Highest, High priority tests}. If this passes, move onto job 4.
    • Job 4 = Nightly Build {2000x Highest, High, Medium, Low priority tests covering ALL features and functionality}.

    Obviously the specifics and therefore your mileage may vary, but hopefully you get the idea, and the point is this: split up your automation jobs so they increase in scope and duration over time. Don’t START with your longest running tests, FINISH with them. Front-load the risk of your automation failing by running the most important tests first, so you can provide feedback faster in the event something serious is wrong.
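
    One lightweight way to wire this up (a sketch only – I’m assuming pytest and its marker support, and in a real project the markers would be registered in pytest.ini) is to tag tests by priority and let each job select the subset it needs:

        # Illustrative only: tag tests by priority, then let each CI job pick its subset.
        import pytest

        @pytest.mark.smoke
        def test_user_can_log_in():
            ...  # highest priority check

        @pytest.mark.regression
        def test_existing_report_totals_unchanged():
            ...  # high priority regression check

        # Job 1 (smoke):       pytest -m smoke
        # Job 3 (regression):  pytest -m "smoke or regression"
        # Job 4 (nightly):     pytest              # run everything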

    Minimise manual testing effort

    The goal of all your test automation efforts should be to minimise the amount of manual testing effort required to ship your releases. That doesn’t mean you’re going to eliminate manual testing effort entirely. Nor, in my opinion, should you wish to. Instead, you should be aiming to redirect the efforts of your testers to higher value exploratory testing, monitoring and test leadership activities.

    What you don’t want to be in is a situation where you want to carry out a release but you need weeks of manual feature and regression testing in order to do so. These are exactly the kind of activities that your test automation should be designed to replace. But you should follow the guidelines above, to identify the kinds of tests you SHOULD be aiming to automate, and how to go about doing so.

    Equally, you should follow the guidelines below so as not to fall into the trap of attempting to automate tests which will slow down, reduce confidence in, or outright cause your test automation efforts to spectacularly crash and burn.

    Early stage development

    Trying to automate tests for features still in the early stages of development is a recipe for frustration. You’re much better off waiting until you have something stable to test against. Often, that means the feature will have already been through several iterations of testing and fixing, before it’s ready for an automated test.

    Low priority features

    Don’t waste time and effort trying to automate low value features. Focus on bigger wins. If you’re not sure whether a feature is high or low priority, check in with your product owner or manager; they should definitely have an answer for you. I know I would!

    Tests that don’t have well defined outcomes

    If you don’t know what the outcome of a test is or should be, you shouldn’t be automating it. Unless it’s a performance test or some other kind of analysis (e.g. a static code or security analysis) where the outcome is a measurement or report which may or may not be acted upon; for those things the rules are slightly different. For a functional test, you should definitely know the outcome, and it should be binary (pass/fail) in nature. Anything else is going to lead to problems.

    Tests which cannot be fully automated

    On the subject of trouble, if your test can only be partially automated, and will as a result need manual intervention to conclude successfully (or unsuccessfully as the case may be) – this is not a good candidate for inclusion in your automated test runs. Use it as a tool, as a script to aid testing by all means, but don’t use it as an automated test.

    To conclude, if you’re trying to put in place a continuous testing strategy, automated testing is going to play a big part. You need to be super careful about what tests you choose to automate, and how you go about tooling them. The guidelines above will help you with that.

    Did you enjoy this article? Watch the video recording below:

  • Test-Planning-Simplified-Webinar

    Test-Planning-Simplified-Webinar

    As part of my Product Manager duties with TestRail, I delivered a test planning webinar based on my Test Planning Simplified article.

    You can watch the video below, or via the TestRail site, where you’ll find more webinars in this series and other useful content.

  • Cunning Strategies for Getting out of a Testing Rut

    Cunning Strategies for Getting out of a Testing Rut

    One of the things you’ve probably noticed when you’ve been software testing for a while, and particularly when you’ve been testing the same product for any length of time, is that your brain starts to settle into some established ways of thinking about the software you’re testing.

    After a while you already know where lots of the more challenging areas of your product are, and when you begin to do some testing you make a beeline for those areas because they’ve already proven themselves to be the most fertile hunting grounds for juicy bugs.

    But how did you get into that area of the system? What route did you follow, and what might you have missed along the way?

    What if instead of making that beeline – you branched out in a completely different direction?

    What might you find instead?

    Getting your groove on

    The brain is a funny thing. Incredibly powerful of course, but also with annoying tendencies to get stuck in ruts or grooves of comfortable thinking.

    In fact – when you’re carrying out your software testing, your brain is probably finding ways to think fast (lazily) rather than slowly (deliberately, analytically, creatively) much of the time. And the temptation will be to let it, because, well – that’s just what it does, right?

    In the field of evolutionary psychology there’s a growing body of evidence that serves to demonstrate how our brain can deceive us. The list of cognitive biases (deceitful shortcuts in our thinking) is a long one – but sadly, just being aware of them isn’t a complete solution.

    As professional software testers we have to not only be aware of our biases, we have to take control of them and constantly challenge ourselves to break out of comfortable patterns of thinking.

    Imagine you have a piece of paper and you make marks with a pen on that surface. The surface records the marks accurately. Previous marks do not affect the way a new mark is received. Change the surface to a shallow dish of gelatin. You now put spoonful’s of hot water on to the gelatin. The hot water dissolves the gelatin. In time, channels are formed in the surface. In this case previous information strongly affects the way new information is received. The process is no different from rain falling on a landscape. Streams are formed and then rivers. New rain is channeled along the tracks formed by preceding rain. The gelatin and landscape have allowed the hot water and rain to organise themselves into channels or sequences. – Edward de Bono. “Think!.” Random House, 2009

    Those channels, sequences or grooves that de Bono talks about are exactly what we need to break out of in order to deviate from established paths or sequences when you’re testing a piece of software.

    We have to break out of our biases, old or comfortable ruts – busting out of the established groove and opening up new channels for thinking about and looking at the systems and software we’re paid to test.

    Pattern interrupts

    One way of doing this that de Bono talks about in his same book (Think!), is the use of random words, in order to steer thought away from those established ruts and patterns. His technique is actually very simple:

    • You already have or identify a focal point for your thinking.
    • You choose a random word.
    • You use the random word as a launching point for thinking creatively about the focal point

    So by way of an example – let’s say the focal point of your thinking is a grooming meeting, and you want a new, more creative way of thinking about the stories being presented. At random – you choose the word “Absent”.

    All you need to do now is follow the new direction the word takes your thinking in. So in the case of a grooming meeting, you might start thinking about or asking questions like:

    • What’s missing or absent from this story? Performance requirements? Security requirements?
    • What if a specific piece of data is absent or missed due to, e.g. User error?
    • What if a step is missed or becomes absent?
    • What if a piece of architecture becomes absent (or fails over)?

    You might come up with many more ideas depending on your circumstances, product and team. But hopefully you get the idea.

    The results of using this tool can be very powerful, because now instead of following a familiar route (train of thought) and arriving at the usual destination (conclusion, opinion etc.) – you’re liable to end up somewhere completely different, and probably via a very different route to what you’ve been used to.

    De Bono has a selection of words he suggests you use for this process, nouns typically – like the following: Letter, Barrier, Ear, Tooth, Bomb, Soap.

    But the context he’s using is slightly different. He’s coming from a creativity angle. Testing is absolutely a creative discipline, but perhaps there are some other words we can use to jog our thinking instead…

    Heuristics

    Many folk, particularly within the Context Driven Testing community, like to talk about heuristics and mnemonics. These are powerful reminders that can be used as frames of reference and to steer testing efforts towards areas of risk. Often they’ll come in the form of a checklist, or a mind map, or a cheatsheet.

    For this technique though, we don’t need any of that. Just a list of the keywords will do, like the list of product quality attributes below:

    1. Sequence
    2. Concurrence
    3. Confluence
    4. Synchronisation
    5. Share
    6. Interaction
    7. Continuity
    8. Hierarchy
    9. Priority
    10. Dependency
    11. Repetition
    12. Loop
    13. Parameter
    14. Prerequisite
    15. Configuration
    16. Rule
    17. Customise
    18. Constraint
    19. Resource
    20. Access
    21. Lock
    22. State
    23. History
    24. Rollback
    25. Restore
    26. Refresh
    27. Clone
    28. Temporary
    29. Trace
    30. Batch
    31. Void
    32. Absent
    33. Feedback
    34. Saturate
    35. Sort
    36. Scale
    37. Corrupt
    38. Integrity
    39. Invoke
    40. Timing
    41. Delay
    42. Customers
    43. Information
    44. Developer
    45. Team
    46. Tools
    47. Schedule
    48. Deliverables
    49. Structure
    50. Functions
    51. Data
    52. Platform
    53. Operations
    54. Time
    55. Capability
    56. Reliability
    57. Usability
    58. Scalability
    59. Performance
    60. Compatibility

    Whenever you feel stuck (in an established groove, rut or pattern of thinking), you can just pick one of the words from the list and use it to generate some ideas for new ways to carry out your software testing.

    I’ve been testing a repayment calculator recently. So I’ll use that as the basis for a couple of examples, just to get you started:

    When I looked at the clock, I saw the second hand was pointing at 41, Delay:

    • What happens if I delay the user actions? Say I go off to grab a coffee while filling out the various stages? Does my session expire? Do I get logged out? Does the application do anything to protect sensitive information from prying eyes?
    • What happens when the server is under load? Are responses delayed?

    Let’s try a different one. This time, I got 47, Schedule. I’m still working with the same application:

    • The application gives me the option to schedule repayments. I’ll explore that for a bit. Does the schedule work?
    • Can I re-schedule once I’ve submitted a plan?

    When I tried again, I got 11, Repetition:

    • What happens if I repeat the calculation? Do I get the same result?
    • What if I keep repeating the same step? Am I able to do so? Should I be able to do so? Do I see any errors?

    And so on and so on. You can use this technique whenever you need to. It’s important that the selection is random though – otherwise your brain will just choose words you feel comfortable with, which is not going to achieve the desired effect of taking you off in a completely new direction.

    Watch the clock

    For randomness you can use another technique straight out of De Bono’s playbook.

    The more observant among you will have noticed there are 60 words. That means you can use the second hand on your watch, or whatever other clock you have to hand. Just glance at the time, make a note of the number of seconds, and use the matching word to bust you out of your testing groove.
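
    If you’d rather let the computer do the glancing, here’s a minimal sketch of the same trick (assuming the 60 words above are held in a Python list – only the first few are shown below):

        # Minimal sketch of the clock trick: map the current second onto a prompt word.
        import datetime

        # Assumed to contain the full 60-word list from above, in order.
        WORDS = ["Sequence", "Concurrence", "Confluence", "Synchronisation"]

        def prompt_word():
            second = datetime.datetime.now().second  # 0-59
            return WORDS[second % len(WORDS)]

        print(f"Test idea prompt: {prompt_word()}")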

  • The Testing Hero’s Journey

    The Testing Hero’s Journey

    The video of my 2020 Romanian Testing Conference talk, The Testing Hero’s Journey, can be found below. Full disclosure, the talk was originally planned as a 2hr workshop. But then COVID happened. So it got compressed down to a 30min talk instead.

  • Test Planning Simplified

    Test Planning Simplified

    In my experiences of planning for testing in various different environments, and across a number of teams and organisations, the value of that planning was never the document itself; rather, it was the thoughts and consideration of the activities, resources and potential risks and issues yet to be discovered.

    Thinking deeply about all of those things and writing them down, in whatever form the final document took, was the real value of the exercise.

    Having those thoughts documented can of course be useful when you need to communicate the testing activities across a team of subordinates. It’s not quite so useful for communicating your intentions as they relate to testing activities to your stakeholders.

    It’s far better to have those people in the room with you while you’re formulating your plan, so that they can provide you with the information you need to do that planning; and so that they have a say in the what, when, who, where, how of your test plan.

    Pro-Tip: When writing a test plan, keep the 5 W’s front of mind: What, When, Who, Why, hoW.

    • What will (and won’t) be tested? – The scope of your testing activities.
    • How will it be tested? – The technical details of your testing; approaches, tools, environments, data, automation etc.
    • Who will test it? – What human resources will you need to keep the testing on track?
    • Why does it need testing? – Your rationale for committing time and resources to testing specific areas, over others. What are the risks to be addressed?
    • When will testing start and finish? – What are the entry and exit requirements for your testing? How long will it take?

    In ye-olde days of testing, I used to spend substantial amounts of time pulling together lengthy master test plan documents, in the region of 30-60 pages long (depending on the complexity of the project). Always with the sneaking feeling that they would probably not be read with the same amount of care I put into writing them.

    Thankfully, with the ascent of more agile/lean ways of working, stakeholders have somewhat less patience for documentation of this nature. And they would often prefer and expect to be directly included in the planning process.

    With that in mind, a leaner test planning and documentation mechanism is preferred. So I present to you 3x options for collaboratively documenting your test plan below.

    Mindmapping Your Test Plan

    I’ve found mindmapping generally to be an invaluable tool during the course of my software testing career. Whenever I have a problem or a piece of analysis to do, mindmaps – whether in digital or notebook (pen & paper) form – are my go-to tool for exploring the problem space. And it’s the exploration of the test plan problem space that I’d happily recommend using a mindmap for also.

    Here’s how I’d approach writing a test plan document in mindmap form, step-by-step:

    [1] Start with a central node. What needs to be delivered? What is the outcome your stakeholders are looking to accomplish? This should form the locus of your testing efforts.

    [2] From this central point, create branches for the other key components of your test plan:

    • Testing scope – What will you address with your testing (in scope)? What will you not address (out of scope)?
    • Timescale – When will the testing start & finish?
    • Testing resources – Who will do the testing? What will they need? Where will they do it?
    • Testing approaches – How will the testing be carried out?
    • Risks & assumptions – What obstacles can you foresee? How will those be addressed?

    [3] From those branches, drill down further into the various items and activities. For the scope, you can take the requirements, features or stories as being the next level down (sub-branches of the Testing Scope branch); and once you have those, you can drill further down into specific test cases, scenarios, exploratory sessions or whatever is needed depending on your preferred testing style.

    [4] Do the same thing with all the other nodes and branches until you have enough detail for a test approach which stands up to some level of scrutiny from your stakeholders and team.

    Pro-Tip: Don’t aim for perfection with your test plan. Whatever form it takes. Expect your stakeholders and team to probe the initial plan with some questions. You can update and revise it, and that process is much less painful if you haven’t already convinced yourself that the test planning is “done”. You should consider it a working document, subject to constant revision and updates based on the progress of your testing.

    Once done, you will have ended up with a document that may look something like this, or completely different depending on the specifics of your approach and context. Either way is fine. As I mentioned earlier, the value of the test plan is in the planning (i.e. the thinking), not the document.

    For super bonus points, you should create your mindmap collaboratively with the key stakeholders for the project or deliverable. Walk through all your thinking and rationale with them. Design the test plan together in a workshop. Give them the opportunity to add their own thoughts and ideas, and to buy into your approach as a result. The implementation and execution of your test plan will go more smoothly as a result.

    Here are a few more examples of mindmapping in the testing space:

    The Single Page Test Plan

    A similar approach is to use (or attempt) a single page test document.

    In terms of content, you’ll probably find that taking this approach covers much the same ground as using a mindmap – since, really, what is a mindmap other than a stylised set of indented lists?

    Some people don’t like reading or looking at mindmaps though, finding them confusing or otherwise difficult to get their heads around. Using a simple one-page document addresses this objection, while keeping your test plan nice and lean still.

    The scope of your document and the sequence of steps to be followed is virtually identical too:

    [1] Identify the questions to be answered or addressed by your test plan (remember the 5 W’s). Use those as your section headers. You’ll probably end up with something like the following (looks familiar, huh?):

    • Testing scope
    • Timescale
    • Testing resources
    • Testing approaches
    • Risks & assumptions

    [2] Capture the necessary information on your document. Ideally in bullet point form, but provide further information (diagrams, models, tables etc) as needed.

    You might (and in fact will likely) discover that your test plan doesn’t quite fit onto one page. Don’t worry about it. So long as the intention is to minimise the amount of extraneous information, you’ll be fine. Just make sure you capture the necessary information your stakeholders and testers need to enact the various items on the plan.

    Pro-Tip: Use a text editor to capture the plan details rather than a formatting heavy tool like Word or Google Docs. Using a simple text editor and marking up your text will save space and preserve your intent to keep the doc down to a single page.

    I’ve found that this is a good way to facilitate collaborative test planning as well. Get everyone into a room (virtually, or physically) and project or share the document you’re working on. Write the document as you’re collaborating. Capture the ideas of your entire team and any stakeholders in the room. Reflect their thoughts back to them as words on the page. If you follow this approach, you’ll start finding it very easy to get support from people, since the document contains their own words, as captured during your collaborative test planning session.

    The Testing Canvas

    Another iteration on the same line of thought is the testing canvas.

    I’ve never been a huge fan of canvases personally, but they seem to have a lot of traction in the lean and agile worlds – so I’ll go along with it where necessary.

    On the assumption that you have a fairly well defined set of sections for your test plan (such as the ones I’ve already mentioned a couple of times above), mapping those sections onto a testing canvas is a trivial task. Here are a few examples to give you an idea of what I mean:

    Again, the key benefit of following this kind of an approach is to provide a good mechanism for people to collaborate on the test planning. So, you could do this in several ways depending on what works best for your team:

    • In a meeting room, using a white board to create the sections, and Post-It notes for the activities
    • Create a spreadsheet with the various sections, and collaborate on that in a virtual space

    Or you could use some other tools. Trello for example.

    Test planning simplified

    I’m a big fan of this quote from Helmuth von Moltke:

    “No plan survives first contact with the enemy.”

    I especially like Mike Tyson’s version of the same truism:

    “Everybody thinks they have a plan, until they get punched in the face.”

    In the era of Modern Testing, you don’t need a lengthy test plan document to express your intent over the course of an iteration, or even an entire project. Most often, you just need to be sure you’ve identified what needs to be done, by when and whom, how it will be done and what resources you’ll need to do so.

    More often than not, those ideas can be captured quickly and easily using a leaner form of documentation such as the ones above.

    As I mentioned at the start of this article, the plan itself isn’t the important thing. It’s the thinking that goes into the plan that’s important. The plan itself is at best an intermediary document capturing a statement of intent from a specific point in time. It will almost certainly change as the project evolves and more information surfaces.

    Rather than creating a huge document which goes out of date almost the moment it is completed, focus on the creation of smaller, leaner pieces of documentation which can more easily be updated when needed.

    And above all, collaborate. All of the approaches I mentioned above can be used to facilitate meaningful collaboration between the person(s) responsible for steering the test effort, and the people with a deep interest in the outcome of that effort.

    What’s more, mindmapping tools in particular work very well in the online meeting space – since people can very clearly see the mindmap evolve during the course of a discussion. The same can be said of the one-page plan and the testing canvas, but the mindmap is a much more visual tool. Using something like XMind for example will give you the ability to demonstrate relationships between various items in the plan quickly and easily, and to call them out using graphical elements.

  • 7 Things Awesome Testers do That Don’t Look Like Testing

    7 Things Awesome Testers do That Don’t Look Like Testing

    If you’re supervising testers, or if you have testers on your team or in your organisation, you probably see them doing a whole bunch of stuff that doesn’t look much like testing…

    • That conversation you see them having with the developer, product owner or scrum-master. Not testing.
    • The time they spent researching a new emulation tool. That wasn’t testing either.
    • All that effort put into building and configuring environments. Not testing. Well, maybe a little bit of testing. But more like a preamble to the main event.
    • Oh yeah – and that presentation they did for their team to the department. You guessed it. Not testing.

    Obviously testers are paid to test. I mean, the clue’s in the job title – right?

    So why are they doing all this other, non-testing, stuff? And is it adding any value?

    Let’s take a look and see shall we?

    Talking

    Prior to my illustrious career in the world of I.T., I worked in a number of other roles where talking was definitely not seen as a value adding activity. In a number of them in fact, it would have been seen as a value diminishing activity instead. You may be inclined to think likewise when you see your tester having one-to-one or one-to-many conversations within your team. But look a little harder.

    Talking with, questioning, enquiring of and sometimes even interviewing developers, product owners, other testers and members of the team is a vital part of your tester’s work. Great testers will use this time to drill deep down into the work that’s being carried out, establishing the what, how, when, why and who of stories – and, based on the results of those conversations, how they need to test.

    Some testers don’t even need to touch the keyboard to find a defect. They’ll be able to do it simply by asking your developers the right questions.

    Relating

    Being able to have the kind of conversations that lead – if not necessarily to defects being identified or fixed before they even become a problem – to powerful insights, requires some effort up-front to build a decent working relationship. The wheels need to be greased a bit.

    Awesome testers spend time developing their social skills. They understand that teams are complex systems and are wary of the intended and unintended consequences of their interactions with the system. They might even go so far as to treat it like another application under test, making small changes to their relational behaviours and observing the results.

    Then they’ll go ahead and process the feedback to ensure they’re speaking to people at the right time, in the right place, with the right information to hand – to make sure they get the best results from their interactions.

    Writing

    Not all teams fix bugs on the basis of a conversation or a Post-It note. Sometimes your tester will need to write a defect report. They’ll need to know how to get the required information across in a clear and succinct fashion. Learning to write a decent report takes some effort.

    Sometimes those reports will need to be a little longer. Your tester might need to write a strategy, or a test process or programme report. That report might get circulated both in and outside of the organisation and be seen with varying levels of interest by any number of stakeholders. Having the ability to write and communicate pertinent information in a compelling fashion is a skill. It takes some effort to acquire and develop.

    Having a tester who is willing to invest the time and effort in learning how to write properly is a bonus, because they’ll not only write better, they’ll think more clearly too.

    Thinking

    Of course, writing isn’t the only tool your tester can use to sharpen their thinking. They’ll probably have a toolbox full. If you poke around inside the toolbox, you’ll find some heuristics, mnemonics and creative prompts.

    You’ll find your tester has many thinking hats and that those hats help her to approach problems from a number of different directions depending on the context in which she’s working.

    As with your developers, the most important work is taking place inside of your tester’s head – long before they ever touch a keyboard.

    Learning

    Most human achievements began with a test of some description, so you can expect your tester to be enthusiastic in learning about the world around them. In a professional capacity, that’s likely to mean desires to learn about the organisation, the domain, the technologies, architecture and specifics of the software under test for starters. But there may be some other passions too.

    One of the traits of great testers is a willingness to follow their nose. To pursue their quarry wherever it may lead. Sometimes the hunt will lead them some place amazing, and they’ll discover philosophies, insights and other valuables that will drive their testing skill to new heights. Other times, the hunt may take them down a hole from which they need rescuing.

    Don’t ever quench their passion for the hunt though. Because when they find something good, it won’t only benefit them. It has the potential to make your team and product better too.

    Sharing

    The best testers have learned to share their kills. To bring them home and divide them amongst the team. Above and beyond that, if they’ve developed their communication skills sufficiently they may be willing to share their learnings with the wider organisation, their local, national or international tech communities.

    Turning what they’ve learned into blogposts, articles, presentations, workshops or other learning platforms, as well as applying their learning in the day job, just reinforces the value of their learning at a personal level – and helps to build up the people with whom they work.

    Building

    If your tester isn’t looking for ways to improve the speed, scope and efficiency of their testing efforts on a daily basis – then you probably hired the wrong person. Increasing the breadth and depth of their testing will be a natural consequence of your tester wanting to learn more about the software, system and architecture under test – so of course they’ll want to build tools to help them do it.

    The tools they build may not look like how you expect though. Your tester might leverage some existing platform or toolset, extending its capabilities or repurposing it to their needs. Or they might develop some customised data to be injected into the application for more effective testing. They may develop a script or a tool from scratch that helps extend their own testing skills, scaling them up so more testing can be carried out faster than ever before.

    Their tools may not even be software related. The tools of your tester may well be more facilitative than hands on. Watch them though and encourage them to develop a toolkit that complements their skillset.

    Testers should develop skills in many areas

    It may not have occurred to you before that these are all skills that your testers can develop and that will serve them well in their testing efforts. Each one of them adds significant value to the project on which they work, your team, your organisation and when shared outwards – with the testing and tech community as a whole.

    But not only that – they’ll benefit them personally. Each of these skills is broadly applicable not only in a work context but in life. They’ll help them to be a better person, a better friend, a better partner, a better human being. And because they’re so broadly applicable – they can take them anywhere they need them. From role to role, organisation to organisation.

    They’re valuable assets. Testers should look for opportunities to acquire them, and to apply them whenever they can. I’ve listed a few starting points below. If you have some further suggestions I’d love to hear about them in the comments.

    • Talking – Karen Johnson delivered an excellent talk on The Art of Asking Questions at TestBash 2015 that exemplifies questioning, interviewing and, of course, listening techniques.
    • Relating – It’s not quite the same thing, but if you think about software testing from an anthropological or social science perspective – then you arrive at the work of Huib Schoots and John Stevenson.
    • Writing – James Bach and Jerry Weinberg have both had much to say about the practice and benefits of developing a good writing habit.
    • Thinking – John Stevenson has delivered a number of workshops and written an entire book on the psychology of software testing.
    • Learning – Check out James Bach’s book Buccaneer Scholar for more on the benefits of a constantly evolving and unbounded learning philosophy.
    • Sharing – There are various communities within which the sharing of tools, tips, techniques, approaches and thinking is rife. A great place to start is The Ministry of Testing.
    • Building – James Bach and Michael Bolton have written a paper on automation in testing that anyone who aspires to develop and build testing tools, or simply do more effective testing, should read.
  • Automating PDFs and Windows Objects with Python and Webdriver

    Automating PDFs and Windows Objects with Python and Webdriver

    At my current gig I needed a way to check the print styling wasn’t broken across a range of pages and browsers. It was an obvious candidate for automation and, since I hadn’t had much of an opportunity to build my Python skills, I decided to write the script in Python.

    I envisaged the script as being relatively straightforward. Using Webdriver I would instruct the browser to go to the pages we wanted to check, execute the print function and then check the output. Of course, we didn’t want all of those prints to actually end up in the printer. So the first step was to identify a solution that would enable us to print to PDF. Although CutePDF Writer allows you to print to PDF by default, it doesn’t allow you to just save the PDF file. So instead I ended up using NovaPDF, which allows you to set up a custom profile and save the PDF straight to a predefined directory.

    Having done that, I was able to implement the following code, which sets up the Firefox Webdriver instance with an “always_print_silent” profile. This means that when the print function is activated, it won’t open any kind of dialogue. It will just print to whatever the default printer driver has been set to.

    The script imports all of the URLs we want to check via a CSV file. Once the browser is open, it navigates to all of the URLs in the file, calls the javascript window.print() function, and, due to the “always_print_silent” profile, saves the resulting output with the help of NovaPDF.

        # script relies on having novaPDF (www.novapdf.com) installed in order to print to PDF
        # AND configure the PDF printer to silently save to a pre-defined location
    
        # selenium imports
        from selenium import webdriver
        from selenium.webdriver.common.keys import Keys
        # csv reader import
        import csv, time
    
        # need to setup a webdriver profile so that the print dialog screen is skipped
        FFprofile = webdriver.FirefoxProfile()
        FFprofile.set_preference('print.always_print_silent', True)
    
        # create driver with the profile
        driver = webdriver.Firefox(FFprofile)
    
        # open the CSV file
        with open('data/barnetPrintURLs.csv') as csvfile:
            urlReader = csv.reader(csvfile, delimiter=',', quotechar='|')
    
            # loop through the CSV and check all the URLs
            for row in urlReader:
                # each row is a list of columns - the URL is in the first column
                driver.get(row[0])
                # execute javascript to print the page
                driver.execute_script("window.print()")
                time.sleep(10)
                
        driver.quit()
    

    So far so good.

    Next up was Chromedriver, and things started to get a bit more complicated – since Chromedriver doesn’t support silent printing. 🙁 This meant that every time the window.print() function was called I ended up with a Windows print dialogue. I couldn’t interact with the dialogue window from inside Webdriver, so I needed some other solution.

    Fortunately, Python provides some tools with which to accomplish this task.

    SWAPY, or Simple Windows Automation on Python, provides a Python interface to Windows objects. In much the same way as you might once have been able to identify objects using, e.g., Quick Test Pro – you can use the SWAPY interface to interact with Windows programs and convert actions into Python code, which can then be implemented in a script by calling the pywinauto library.

    In the screenshot below, you can see I’ve selected the Print dialog, selected the &Print function (the Print button) and generated some pywinauto code in the Editor window.

    pywinauto code in editor window

    Having utilised SWAPY to identify the dialog and the actions needed to interact with it, I just needed to incorporate those actions into my Python script. That’s just a matter of installing the pywinauto library (and SendKeys, and the Microsoft C++ compiler which is only compatible with Python 2.* – see code comments) and adding some additional code to my script to deal with wait conditions etc., below:

    # selenium imports
    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys
    # additional Chromedriver specific import
    from selenium.webdriver.chrome.options import Options
    # Chromedriver doesn't support silent printing - so we need to interact with Windows using pywinauto (pywinauto.pbworks.com/w/page/9546218/Installation)
    # (which also requires sendkeys https://pypi.python.org/pypi/SendKeys/0.3 
    # and Microsoft Visual C++ Compiler for Python 2.7 http://www.microsoft.com/en-us/download/details.aspx?id=44266)
    import pywinauto, time
    
    # need to setup a webdriver profile so that the print dialog screen is skipped
    chrome_options = Options()
    chrome_options.add_argument("--disable-print-preview")
     
    # create the pywinauto object
    pwa_app = pywinauto.Application() 
    
    # get the url's
    with open('data/barnetPrintURLs.txt', 'r') as urls:
    	# for each line in the file navigate to the url
    	for line in urls:
    		# create driver with the profile
    		driver = webdriver.Chrome(chrome_options=chrome_options)
    		driver.get(line)
    
    		# execute javascript to print the page
    		driver.execute_script("window.print()")
    		
    		# now use pywinauto to interact with the Windows dialog and click the print button
    		try:
    			a_check = lambda: pywinauto.findwindows.find_windows(title=u'Print', class_name='#32770')[0]
    			try:
    				dialog = pywinauto.timings.WaitUntilPasses(5, 1, a_check)
    				window = pwa_app.window_(handle=dialog)
    				window.SetFocus()
    				ctrl = window['&Print']
    				ctrl.Click()
    				# need an explicit wait to allow the print to go through so we can quit the browser instance
    				time.sleep(5)
    			except:
    				print('Something went wrong')
    	
    		finally: 
    			driver.quit()
    

    In much the same way as the Firefox script, this just runs through the URLs, navigates to each page, and activates the print function. We then have to switch to pywinauto to interact with the Windows print dialog, hit the print button and wait for the dialog to close and the print to actually be actioned, before closing the webdriver instance and starting the next loop.

    I also wrote a script to carry out the same functions in IEdriver. It follows much the same format (with a couple of additional implicit waits and checks for IE quirks) so I haven’t bothered pasting it here.

    Phew. My simple scripting exercise was a lot more complicated than I originally thought. Thankfully Python provides a lot of flexibility for doing this kind of stuff. I imagine this would also have been achievable in C# using .NET, but I doubt very much whether I would be able to do this in Java or Ruby. If somebody has done this in another language, I’d be very interested in hearing about it, just so I can learn how you went about it.

  • Correlating Dynamic Values in JMeter

    In previous posts I have covered initial JMeter setup, recording with the HTTP(S) Test Script Recorder, and putting together a login script.

    If you’ll recall from that last post, the login script wasn’t working yet. The actual login request (i.e. the submission of the login credentials as a request to the server to initiate a logged-in session) was failing because we weren’t providing it with all of the information it needed.

    send params with request image from JMeter

    In the Send Parameters With the Request section of the HTTP Request sampler, request login, above, we can see that there’s an AppActionToken that looks as though it’s been generated by the server, probably to uniquely identify the session. If we continue to scroll down the list of parameters, we’d see that there are a number of other tokens that are required in order to successfully login:

    appActionToken = 1m8mf7N5vmsDvbmwR42h5gcGufAj3D
    openid.pape.max_auth_age = ape:MA==
    openid.ns = ape:aHR0cDovL3NwZWNzLm9wZW5pZC5uZXQvYXV0aC8yLjA=
    openid.ns.pape = ape:aHR0cDovL3NwZWNzLm9wZW5pZC5uZXQvZXh0ZW5zaW9ucy9wYXBlLzEuMA==
    prevRID = ape:MTFESk4zWkIwQzhSSjg5SlQ0SjA=
    pageId = ape:Z2JmbGV4
    openid.identity = ape:aHR0cDovL3NwZWNzLm9wZW5pZC5uZXQvYXV0aC8yLjAvaWRlbnRpZmllcl9zZWxlY3Q=
    openid.claimed_id = ape:aHR0cDovL3NwZWNzLm9wZW5pZC5uZXQvYXV0aC8yLjAvaWRlbnRpZmllcl9zZWxlY3Q=
    openid.mode = ape:Y2hlY2tpZF9zZXR1cA==
    openid.assoc_handle = ape:Z2JmbGV4
    openid.return_to = ape:aHR0cHM6Ly93d3cuYW1hem9uLmNvLnVrL2dwL3lvdXJzdG9yZS9ob21lP2llPVVURjgmcmVmXz1uYXZfc2lnbmlu
    

    In addition to the dynamic parameters, we also need to submit the username and password, which are currently hardcoded as can be seen below.

    hardcoded username and password

    We could parameterise these also, but I’ll talk about that another time.

    So how do we go about obtaining the correct values for these parameters, such that when we send them all, along with some valid login credentials, we get a logged-in session back from the server?

    Well, the first thing is to figure out from whence they came. Experience has taught me that it’s usually (though by no means always) from the immediately preceding server response. So let’s go back to the results of our test and take a look:

    test results

    The HTML response isn’t telling us much… We need to switch to text mode, and then take a look for the parameter name. Let’s start with the appActionToken parameter:

    test result text mode

    Voila! We’ve found the token, and the value. But if we run the test again, we’ll probably see a different one:

    token found

    It’s a fair (in fact guaranteed) bet that we’ll find the rest of our parameters embedded within this response too.

    So what we need to try and do is extract the parameters from the server response each time we get one – and then pass them into the next request. Performance testers call this process correlation. JMeter provides us with the Regular Expression Extractor so that we can go ahead and correlate our parameters from one request/response to another.

    The first step is to add a Regular Expression Extractor to the request login page HTTP Sampler, by right-clicking on it and then selecting Add > Post Processors > Regular Expression Extractor, thusly:

    regex extractor

    Next we need to write some Regex with which to extract the parameter.

    If, like me, the idea of writing regex makes your toes curl with horror, don’t worry. I’ll share a special piece of JMeter goodness with you. It’s the only piece of regex I’ve ever really needed to know. And it goes like this:

    (.+?)
    

    Did you get that? I’ll repeat it just in case you missed it…

    (.+?)
    

    To use this regex and get the parameter we’re looking for, I reckon something like the string below should work:

    name="appActionToken" value="(.+?)" /
    

    Trying it out in the RegExp Tester view of the response shows that it will indeed work, since the test came back with only one match:

    regex extracted

    The RegExp Tester shows me how many matches the regex pattern will create, what the match is, and the value to be extracted – 1, 2 & 3 below respectively:

    Match count: 1
    Match[1][0]=name="appActionToken" value="pmHS1gy9iYMSJOWBIPlCWZGq1SIj3D" /
    Match[1][1]=pmHS1gy9iYMSJOWBIPlCWZGq1SIj3D
    

    What I’ve done here is take the HTML response and apply the simple bit of regex I described above, in order to capture only the string that we’re interested in.

    <input type="hidden" name="appActionToken" value="pmHS1gy9iYMSJOWBIPlCWZGq1SIj3D" /><input type="hidden" name="appAction" value="SIGNIN" />
    

    The “pmHS1gy9iYMSJOWBIPlCWZGq1SIj3D” bit, basically.
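
    If it helps to see the extraction outside of JMeter, here’s a quick illustrative check of the same pattern using Python’s re module (the response fragment is the one shown above):

        # Illustrative only: the same (.+?) pattern, applied with Python's re module.
        import re

        response = ('<input type="hidden" name="appActionToken" '
                    'value="pmHS1gy9iYMSJOWBIPlCWZGq1SIj3D" />'
                    '<input type="hidden" name="appAction" value="SIGNIN" />')

        match = re.search(r'name="appActionToken" value="(.+?)" /', response)
        print(match.group(1))  # -> pmHS1gy9iYMSJOWBIPlCWZGq1SIj3D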

    I’m not going to go into how the regex works, because that’s beyond the scope of this particular post. What we do need to do now is plug it into the regex extractor so that we can use it in the request login sampler.

    Here’s how the finished extractor looks:

    regex config

    The important things to note are these:

    1. I’ve given the Regular Expression Extractor a meaningful name – appActionToken Extractor
    2. I’ve given the variable into which I want to put the extracted token a meaningful name (in the Reference Name field) – appActionToken
    3. The Regular Expression field contains our regex – name="appActionToken" value="(.+?)" /
    4. The Template, Match No and Default Value fields are filled out more or less as per the Apache JMeter guide.

    You can learn more about the Template, Match No and Default Value fields by reading through the online guide from Apache here. I’d recommend sticking with the defaults, but you may gain some mileage in experimenting with them.

    Having extracted the value successfully and placed it in a JMeter variable, we now need to use the variable in the submit login sampler. We can do that by referencing the variable where the token was originally.

    JMeter knows we want to use a variable when we place the name inside curly braces with a leading dollar sign – ${variableName}, like the below:

    using extracted regex value as a variable

    With that done, we’re almost ready to go. Except our script still won’t work, because there are those other 10 dynamic variables we need to correlate as well. Fortunately, the process is exactly the same for all of them:

    1. Locate the variable in the preceding response – the request login page response in the case of our script.
    2. Construct the regex to extract the variable – as discussed above.
    3. Create a Regular Expression Extractor for each variable to be correlated.
    4. Refer to the extracted variable in the subsequent request – the submit login page in our example, as discussed above.

    Once you’ve gone ahead and done all of that, you’ll likely end up with something that looks like the below:

    test plan tree

    And, assuming that you’ve done everything correctly, running the test again will result in the desired logged-in response from the server:

    working test result

    We can see above that JMeter is logged in, by the “Hello, jmeter” message in the response.

    We’re done! Our login script is now ready to rumble. And having covered the basics, we’re ready to tackle some more advanced JMeter topics. So stay tuned for those…

  • Recording a Login Script with JMeter

    Following on from my last couple of posts where I covered initial setup and use of the HTTP(S) Test Script Recorder, I’m going to build on what’s been done so far in order to develop a login script.

    I’m assuming that you have a test plan setup more or less identical to the below, in which case we’re good to go. If you don’t – then I suggest you read through the preceding posts before continuing further.

    test plan structure in JMeter

    The thing we need to do next is figure out what requests need to be sent in order to simulate a user login. We could try and craft them from scratch, but it’s easier to simply record what happens when I carry out a login and then modify the recorded requests to make them resilient and reusable.

    To do that – we first need to make sure that:

    1. The JMeter proxy is recording our requests
    2. The browser being used is directing traffic to the proxy

    Again, if you’re not sure how to do steps 1 & 2 above, I refer you to the previous post. Assuming that your proxy is recording properly, then we can go ahead and click on the login button and see what happens:

    Amazon sign in button

    Once you’ve done so then, under the Recording Controller you should see some activity. When I wrote this, I saw the responses below:

    recording controller

    Further examination of some of the recorded requests suggests they’re irrelevant to the task at hand. I have no idea what the request below is doing for example:

    what is this request doing?

    If I actually worked at Amazon I could probably go and ask one of the developers what’s happening here, but for the time being I’m going to assume I don’t need it.

    Of more interest to me is the /ap/signin/… request:

    app/sign-in request

    I’m intrigued by the /ap/uedata/ request also, but I’ll ignore that one too for the time being. In the meantime, I’m working with the hypothesis that the /ap/signin/… request is the one that actually requests the login page from the server.

    I’ve added another Simple Controller under my Thread Group. In order to test out my hypothesis I can move that request to the Controller and run it to see what happens. I’ll disable the homepage navigation (right-click on the Controller > Disable) as well, since we don’t need that right now. Disabling it will prevent any child requests from being executed.

    simple controller

    I’ve also moved the HTTP Header Manager that was originally under the HTTP request, and placed it under the Test Plan instead. Every recorded request will be paired with one of these, but we only need one of them to act as a default for the entire test. All subsequent header managers can be discarded, otherwise they’ll just clutter things up.

    Running the test confirms my hypothesis, since I observe the following result:

    results tree

    I’ll rename the request to something more meaningful and add a Response Assertion to check for some text (“What is your e-mail address?”), so we know the request is doing the right thing going forward.

    response assertion

    Now we need to submit some actual login details. Again, the best thing will be to submit an actual login and see what the proxy recorder tells us about the process. I did that while writing this post and saw the responses below:

    responses

    The /ap/signin request seems like it’s probably the one doing the work, and closer examination shows that the request contains the username and password I used to login with.

    http request with username and password

    We can move that request up into our Test Plan as well, so it looks like the below, adding an assertion to check we’re actually logged in after making the request also. Let’s test for some specific HTML that we would only expect to see when we’re logged into the site. Something like “Your Browsing History” ought to do it.

    additional response assertion

    All good. We can go ahead and run that now, right?

    Wrong.

    There’s a problem. Can you tell what it is yet?

    what’s the problem?

    The assertion has failed because JMeter didn’t get the page back that it expected. In fact, judging by the page we did get – it doesn’t look as though the login has worked at all.

    Why not?

    If I take a look at the request again, there’s a few clues as to why…

    closer examination of http request

    The request is sending a token along with the (currently hard-coded) login credentials.

    It’s sending a bunch of other stuff over as well by the looks of it.

    http request params

    In all, there’s actually 11 dynamic variables that need to be correlated across from the previous server response in order for the login to be considered a valid request by the server.

    It’s going to take a bit of effort to get all that sorted out… I’ll show you how it’s done in the next post.
