The degree to which your tests are automated can be a huge factor in how successful your continuous testing strategy is.

Determining the correct split between automated and manual testing will, to some extent, make or break your releases.

If you have too many automated tests, or your tests are too complicated, then they may take too long to run. When things take too long to run, people pay less attention to them. Fast feedback is a critical success factor for teams wishing to deploy high-quality software on a frequent basis.

But if you don’t have enough automated tests, or if the tests you do have produce intermittent (flaky) results, your team will lack confidence in them, resulting in a higher manual testing burden and an extended time to release.

If your stakeholders are to get the feedback they need, when they need it, to make appropriate release decisions, how should you deploy your limited testing resources across manual and automated testing for the greatest benefit?

As with anything, there’s no silver bullet. There are, though, some heuristics or general guiding principles you can follow, which will point you in the right direction. I’ve compiled some of them below.

While you’re coding

The most obvious place to start developing your automated testing capability is while the product code is being written in the first place.

Retrofitting unit tests to legacy code is problematic. If you’re in a situation where code already exists, you may wish to skip this step in favour of tests at some other layer of your technology stack. But where possible, adding unit tests to your testing strategy is going to pay dividends in the long term, providing your team with increased confidence any time they need to add to or change their code.

If the development team is agreeable, following a TDD (Test Driven Development) approach is pretty much the holy grail of this level of testing. With a TDD approach, your developers will write tests BEFORE the code is produced, helping to guide both the design and the development of your solution, and providing you with the best possible degree of unit test coverage in the process. This is the ultimate win, for all concerned!
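To make the rhythm concrete, here’s a minimal sketch in Python with pytest; the `calculate_discount` function, its file names and its discount rule are hypothetical, invented purely for illustration:

```python
# test_discounts.py -- with TDD, these tests are written FIRST. They fail
# until the implementation below is written to make them pass.
import pytest

from discounts import calculate_discount  # hypothetical module under test

def test_orders_over_100_get_ten_percent_off():
    assert calculate_discount(order_total=150.00) == pytest.approx(15.00)

def test_small_orders_get_no_discount():
    assert calculate_discount(order_total=40.00) == pytest.approx(0.00)


# discounts.py -- the simplest implementation that makes both tests pass.
def calculate_discount(order_total: float) -> float:
    return order_total * 0.10 if order_total > 100 else 0.0
```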

Integration testing

Sometimes, though, unit testing won’t be feasible or appropriate, depending on the technology and architectural choices that have been made for your project or product. The next best thing is to follow more of an integration-testing-based approach, focusing on the areas of your project with the highest levels of integration risk.

Precisely how you approach integration testing will depend on your product architecture. In a classic three-tier (database, business logic, user interface) arrangement, you need to focus on the integration between the business logic and database layers, to ensure that the correct data is stored in, or retrieved from, your database when specific transactions are performed. For bonus points, start thinking about performance requirements here too, since transaction times will have some bearing on the success of your project.
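As a sketch of what a test at this layer might look like, the example below exercises a hypothetical `OrderRepository` against a throwaway in-memory SQLite database; substitute whatever database and data-access layer your product actually uses:

```python
import sqlite3

import pytest

class OrderRepository:
    """Hypothetical data-access layer between business logic and database."""
    def __init__(self, conn):
        self.conn = conn

    def save(self, order_id, total):
        self.conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))

    def get_total(self, order_id):
        row = self.conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0] if row else None

@pytest.fixture
def repo():
    conn = sqlite3.connect(":memory:")  # fresh, disposable DB per test
    conn.execute("CREATE TABLE orders (id TEXT, total REAL)")
    yield OrderRepository(conn)
    conn.close()

def test_saved_order_can_be_read_back(repo):
    repo.save("ORD-1", 99.95)
    assert repo.get_total("ORD-1") == 99.95
```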

A more popular architectural style these days is the microservices model, wherein the various components of the system are implemented as individual services. Developing a product this way makes it highly scalable, since those services can be instantiated and deployed as needed across, for example, cloud-based infrastructure. From an integration perspective, it means you need to test the integrations between all those services as part of your automation strategy.
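A service-level integration test can be as simple as calling one service’s API and pinning the response contract that other services depend on. The endpoint, port and payload below are hypothetical, and the `requests` library is assumed:

```python
import requests

BASE_URL = "http://localhost:8080"  # hypothetical orders service under test

def test_orders_service_honours_its_pricing_contract():
    # The (hypothetical) checkout service depends on this response shape,
    # so the test fails if the contract between the two services drifts.
    resp = requests.get(f"{BASE_URL}/orders/ORD-1", timeout=5)
    assert resp.status_code == 200
    assert {"order_id", "total", "currency"} <= resp.json().keys()
```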

Focus on high priority areas

When you’re starting out with your automation strategy, you should work from a set of prioritised tests. The rationale for this is simple: if you’re crunched for time at some point in the future (which, of course, is always), prioritised tests let you start with the highest-priority ones and work down the list as time allows.

The same applies to test automation. If you don’t have any automated tests yet, start by automating your highest-priority manual tests. Don’t do anything else until you have implemented all of your highest-priority tests. This follows Pareto’s 80/20 rule, which, applied to automated software testing, suggests that 20% of your tests will provide 80% of the confidence you need.
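One lightweight way to encode priorities, sketched below with pytest: tag each test with a custom marker (the marker names are my own invention, not pytest built-ins) and run only the top tier when time is short:

```python
import pytest

# Register these custom markers in pytest.ini, e.g.:
#   [pytest]
#   markers =
#       priority_highest: must-pass tests, run on every commit
#       priority_low: nice-to-have coverage, run nightly

@pytest.mark.priority_highest
def test_checkout_applies_discount_to_large_orders():
    # Stand-in assertion; in reality this would drive the checkout flow.
    assert round(150.00 * 0.90, 2) == 135.00

@pytest.mark.priority_low
def test_price_is_formatted_with_currency_symbol():
    assert f"£{135.00:.2f}" == "£135.00"

# Crunched for time? Run just the top tier:
#   pytest -m priority_highest
```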

Look for product risk

You might well ask, “How should I prioritise my tests?” And the answer is simple: look for the areas of greatest risk. There are two main kinds of risk to consider when implementing an automated testing strategy: product risk and regression risk.

Product risk is likely to be somewhat contentious in terms of test automation, since not all product risks are easy to test for, which means they also won’t be easy to automate. In my experience, product risk relates to one of four key areas:

  • Usability - the solution needs to be a pleasurable experience, such that the customer or potential customer wants to keep using the product.
  • Fit for purpose - the solution needs to solve the problem for which it was designed.
  • Reliability - the product needs to have the customer’s confidence, particularly as it relates to accessibility (security) and availability (performance).
  • Business - the product needs to meet the requirements of the organisation that created it; often this means it should generate revenue or turn a profit.

As you might imagine from the list above, it would be very difficult to write a single automated test that addressed even one of these areas. A more likely outcome is that you have a number of tests designed to address some facets of those areas, such that a stakeholder (e.g. a product manager) is able to look at the results of those tests and get a feel for whether or not the product will be usable or reliable.

Acceptance-test automation derived from BDD (Behaviour Driven Development) scenarios is often used for this purpose, in my experience.
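As a sketch of how that can look, assuming a recent version of the pytest-bdd plugin (the feature file, steps and discount rule are all illustrative, not taken from any real project):

```python
# features/discount.feature would contain the plain-language scenario, e.g.:
#   Feature: Order discounts
#     Scenario: Large orders receive a discount
#       Given a cart totalling 150.00
#       When the customer checks out
#       Then the order total is 135.00

import pytest
from pytest_bdd import given, scenario, then, when

@scenario("features/discount.feature", "Large orders receive a discount")
def test_large_orders_receive_a_discount():
    pass

@given("a cart totalling 150.00", target_fixture="cart")
def cart():
    return {"total": 150.00}

@when("the customer checks out", target_fixture="order")
def checkout(cart):
    return {"total": cart["total"] * 0.90}  # hypothetical 10% discount rule

@then("the order total is 135.00")
def order_total_is_correct(order):
    assert order["total"] == pytest.approx(135.00)
```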

Look for regression risk

The other area of risk you should consider is regression risk; i.e. the risk that something which was working before has now been broken, or no longer works in quite the same way, as a result of some change to the product code. Regression testing is a classic win for an automated approach, since by the time you come to automate a regression test, the functionality should be well understood and therefore ripe for automation.

Catching regression failures by way of test automation is an easy win.
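One common shape for this, as a sketch, is a pinning (characterisation) test that locks in known-good behaviour; the VAT function below is hypothetical:

```python
def apply_vat(net_amount: float, rate: float = 0.20) -> float:
    """Hypothetical, long-stable pricing function."""
    return round(net_amount * (1 + rate), 2)

def test_vat_calculation_has_not_regressed():
    # Pins behaviour customers already rely on; any change that alters
    # these results fails the regression suite immediately.
    assert apply_vat(100.00) == 120.00
    assert apply_vat(0.00) == 0.00
    assert apply_vat(19.99) == 23.99
```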

Take performance into account

I mentioned performance above, while discussing integration testing, and also as a product risk. Once you have some automated tests in place, it’s actually pretty easy to monitor the performance of your product, simply by capturing and measuring how long it takes to run your automated tests over time.

This does require a baseline of some sort (a set of tests which remains static in scope, number and execution approach), but once you have this, you should be able to take the individual and aggregated execution times, determine whether the performance of your application has improved, stayed the same, or got worse, and take the necessary actions as a result.
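The sketch below shows the general idea; the baseline file format, tolerance and helper names are all arbitrary choices of mine rather than any standard tooling:

```python
import json
import time
from pathlib import Path

BASELINE_FILE = Path("perf_baseline.json")  # e.g. {"total_seconds": 42.0}
TOLERANCE = 1.20  # flag anything more than 20% slower than the baseline

def timed(test_fn) -> float:
    """Run one test callable and return its duration in seconds."""
    start = time.perf_counter()
    test_fn()
    return time.perf_counter() - start

def check_against_baseline(durations: list[float]) -> None:
    total = sum(durations)
    baseline = json.loads(BASELINE_FILE.read_text())["total_seconds"]
    if total > baseline * TOLERANCE:
        raise RuntimeError(
            f"Suite took {total:.1f}s against a baseline of {baseline:.1f}s; "
            "the application may have got slower."
        )
```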

Create levels of automation testing

Having a baseline set of tests feeds into this next point, which is that it can be helpful to have suites of automated tests which are executed for specific purposes.

One of the traps I often see teams fall into is that they have a huge number of automated tests, which might take a couple of hours to execute, and they need to execute ALL of those tests in order to determine whether it’s ok to release their product.

Now, sometimes you’re going to have a huge number of tests, they’re going to take a long time to run, and there may be no way around that; you HAVE TO run all of those tests for reasons you’ve determined within your team, organisation and context. And that’s fine. But what you don’t want is a situation where it takes, say, two hours to run a batch of tests, only to find that because a single high-priority test has failed, the whole execution needs to be re-run (once whatever the issue was has been fixed).

This goes back to the prioritisation point above. If you’ve prioritised your test cases, then you should be able to split up your test automation jobs into a series of executions that get progressively more advanced as you need them to. You could have something like this, for example:

  • Job 1 = Smoke Test {10x HIGHEST priority tests}. If this passes, move on to Job 2.
  • Job 2 = Acceptance Test {50x tests for new features being deployed in the build}. If this passes, move on to Job 3.
  • Job 3 = Regression Test {400x Highest and High priority tests}. If this passes, move on to Job 4.
  • Job 4 = Nightly Build {2000x Highest, High, Medium and Low priority tests covering ALL features and functionality}.

Obviously the specifics, and therefore your mileage, will vary, but hopefully you get the idea. The point is this: split up your automation jobs so they increase in scope and duration over time. Don’t START with your longest-running tests, FINISH with them. Front-load the risk of your automation failing by running the most important tests first, so you can provide feedback faster in the event that something serious is wrong.
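In CI terms this is simply a chain of jobs that stops at the first failure. As a minimal sketch, the same staging could be driven with pytest markers and a small script; the marker names and test counts mirror the hypothetical jobs above:

```python
import subprocess
import sys

# Ordered stages: cheap, high-priority suites first, so a failure
# surfaces in minutes rather than after the full nightly run.
STAGES = [
    ("smoke", "priority_highest"),    # ~10 tests
    ("acceptance", "new_features"),   # ~50 tests
    ("regression", "priority_high"),  # ~400 tests
    ("nightly", ""),                  # everything (~2000 tests)
]

for name, marker in STAGES:
    cmd = ["pytest"] + (["-m", marker] if marker else [])
    print(f"--- running {name} suite ---")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{name} suite failed; skipping the later stages.")
```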

Minimise manual testing effort

The goal of all your test automation efforts should be to minimise the amount of manual testing effort required to ship your releases. That doesn’t mean you’re going to eliminate manual testing effort entirely. Nor, in my opinion, should you wish to. Instead, you should be aiming to redirect the efforts of your testers to higher value exploratory testing, monitoring and test leadership activities.

What you don’t want is a situation where you want to carry out a release but need weeks of manual feature and regression testing in order to do so. These are exactly the kinds of activities your test automation should be designed to replace. Follow the guidelines above to identify the kinds of tests you SHOULD be aiming to automate, and how to go about doing so.

Equally, you should follow the guidelines below so as not to fall into the trap of attempting to automate tests which will slow down your test automation efforts, reduce confidence in them, or cause them to spectacularly crash and burn.

Early stage development

Trying to automate tests for features still in the early stages of development is a recipe for frustration. You’re much better off waiting until you have something stable to test against. Often, that means the feature will already have been through several iterations of testing and fixing before it’s ready for an automated test.

Low priority features

Don’t waste time and effort trying to automate low-value features. Focus on bigger wins. If you’re not sure whether a feature is high or low priority, check in with your product owner or manager; they should definitely have an answer for you. I know I would!

Tests that don’t have well-defined outcomes

If you don’t know what the outcome of a test is or should be, you shouldn’t be automating it. The exception is a performance test or some other kind of analysis (e.g. a static code or security analysis) where the outcome is a measurement or report which may or may not be acted upon; for those, the rules are slightly different. For a functional test, you should definitely know the outcome, and it should be binary (pass/fail) in nature. Anything else is going to lead to problems.
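To make the distinction concrete, a quick sketch: the first test below has a binary, deterministic outcome, while the second merely produces a measurement for someone to review (both are contrived examples):

```python
import time

def test_vat_inclusive_price():
    # Functional test: binary outcome, passes or fails and nothing else.
    assert round(100.00 * 1.20, 2) == 120.00

def measure_startup_time(start_fn) -> float:
    # Analysis-style check: the outcome is a measurement to be reviewed,
    # not a pass/fail verdict, so it doesn't belong in the automated gate.
    start = time.perf_counter()
    start_fn()
    elapsed = time.perf_counter() - start
    print(f"startup took {elapsed:.2f}s")
    return elapsed
```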

Tests which cannot be fully automated

On the subject of trouble: if your test can only be partially automated, and will as a result need manual intervention to reach a conclusion (successful or otherwise), it is not a good candidate for inclusion in your automated test runs. By all means use it as a tool, as a script to aid testing, but don’t use it as an automated test.

To conclude, if you’re trying to put in place a continuous testing strategy, automated testing is going to play a big part. You need to be super careful about what tests you choose to automate, and how you go about tooling them. The guidelines above will help you with that.